Friday, March 30, 2012

VS C# Cheat

To change the target platform, manually edit the platform tag in the *.csproj project file. The PlatformTarget tag can be Itanium, AnyCPU (the default), x86 or x64.

[UPDATE 2014-04-27] This is unnecessary for VS2010, since you can use the Configuration Manager to add x64 or whatever target you want. See How to: Configure Visual C++ Projects to Target 64-Bit Platforms on MSDN.
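For reference, the manual edit looks something like this (a sketch of the relevant .csproj fragment; the PropertyGroup and its Condition will vary by project and configuration):

```xml
<!-- Sketch: set the target platform inside the matching PropertyGroup -->
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|x64' ">
  <PlatformTarget>x64</PlatformTarget>
</PropertyGroup>
```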

Thursday, March 29, 2012

hg-git and TortoiseHg

[UPDATE 2013-05-28] Make sure your hg-git is up to date and pointing at the most recent tag. I have had issues when I have updated hg but not pulled the latest hg-git changes from durin42.

If for some reason you are using Mercurial, aka Hg, and you need to interact with Git, then you have probably heard about hg-git. There are other tools, but I think this one is the oldest, and even though it's not necessarily the best, it works fine. Now if you are on Windows, then you are most likely using TortoiseHg, aka thg, and if so, integrating hg-git is a snap. The only confusion is that there are multiple websites with seemingly different information. If you already have TortoiseHg installed, all you have to do is follow the directions in the TortoiseHg v2.1.1 Documentation: Section 9.3: Use with other VCS systems: hg-git (git). There are some key differences between this documentation and what you will find on various other websites, which are geared toward non-TortoiseHg users. In particular, hg-git requires Dulwich, a pure-Python implementation of Git, but note what it says in the thg docs:
Current versions of TortoiseHg include dulwich version 0.7.0. We recommend to use hg-git version 0.2.6 with this version of dulwich and Mercurial 1.8.
So you don't need to install Dulwich if you are using hg-git with TortoiseHg! That step is intended for non-thg users.

Now on to use:
  1. Make sure that you have enabled the plugin by adding the line hggit = C:\hg-git\hggit under the [extensions] section of your mercurial.ini file, which can be found in your user-profile folder (the one with your Windows user name). This is described on all of the websites, but most of them use the Linux/Unix home file ~/.hgrc, which you will NOT find on a Windows machine unless you are using Cygwin or MinGW/MSYS. For thg installations of Mercurial, the %USERPROFILE%\mercurial.ini file is the equivalent of the Linux ~/.hgrc file.
  2. Branches should be bookmarked before you can push them to a git repo. Basically this is how hg-git deals with branches: a Git branch is the equivalent of an hg bookmark, because branches in Mercurial and Git are different things. You can do this manually by issuing the command hg bookmark -r default master. That maps the hg branch called default to the git branch called master, by creating an hg bookmark also called master. The -r lets you specify the revision (which can be an hg branch, bookmark or tag), instead of pointing at the current working copy. For example, if you have an hg branch named x64 that you want to push to a Git repo, make a bookmark: hg bookmark -r x64 x64_git; note that in Mercurial branches and bookmarks cannot have the same name. There is an hg-git setting you can add that will strip the suffix, but a better practice might be to use bookmarks instead of named branches in Mercurial.
  3. Finally, to push to your git repo use hg push git+ssh://git@github.com/user/repo.git. Note the "git+ssh://" protocol and the switch from a colon to a slash between github.com and user! That tells hg-git to push to a git repo using ssh. You will also need to enable Pageant, which you can find in the thg folder in Program Files, and you will need an ssh key. If you don't already have one, you can download puttygen and follow the directions on Bitbucket: Using the SSH protocol.
That's it. You don't need to enable the bookmarks extension, because it became part of Mercurial core in 1.8. There is one issue with the author field: you want to make sure that you are committing with the same username. You can set that globally in mercurial.ini or per repo in .hg/hgrc, or alternatively in thg under Settings → Commit. But not setting this is NOT a show stopper.
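Putting it together, the mercurial.ini fragment looks like this (a sketch; the hggit path is wherever you cloned hg-git on your machine):

```ini
; %USERPROFILE%\mercurial.ini
[extensions]
hggit = C:\hg-git\hggit
```

After that, hg bookmark -r default master followed by hg push git+ssh://git@github.com/user/repo.git is the whole round trip.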

Package Predicament Part 3: Python, the final installment

[WARNING: Outdated Material] This post is over a year old, and consequently the ideas, opinions and facts may no longer be relevant or accurate (assuming there were actually facts in this post). Please proceed with caution. You have been warned.

So it turns out that there is a long history between Python and Linux (*). Debian has an official Python Package Policy. This is because Python is an integral part of Linux; Linux libraries depend on Python and Python packages. If you go moving them around, you will break your Linux installation.

One of the main differences you will see is that the /lib/site-packages folder is renamed /lib/dist-packages on Linux (* just Ubuntu, see CORRECTION), as well as the existence of /usr/share/pyshared and /usr/lib/pymodules. I'm not going to pretend I understand everything that's going on, but in a nutshell these are Python modules that Linux is using for something.

Lucky for you, distribute, setuptools, distutils and pip are tailored for Ubuntu so that they will install packages in the correct locations, and also know where to find them. So in general this supports the argument that you should look for your Python packages in the distro repo, especially pip, distribute, setuptools, distutils, and virtualenv. If you install these from your distro repo, you should be OK (**). And above all, "for the love of Guido," use pip, never easy_install!

[CORRECTION: 2012-04-11] (*) After several adventures with VMs (Fedora 16, openSUSE 12 and even FreeBSD), I have done some experimentation, and some of these Python peculiarities exist only on Ubuntu/Debian distros. For example, there is no dist-packages folder on either openSUSE or Fedora. They both use the traditional Python file structure and naming conventions such as site-packages. On Fedora it looks like your packages will go in /usr/lib/python2.7/site-packages, whether you install them with $ sudo yum install python-package or $ sudo pip-python package. Note: on Fedora pip is pip-python, unless you are using it from a virtualenv, in which case it's just pip. There's nothing in the /usr/local/lib folder on Fedora remotely related to Python, nor in /usr/local/share. I didn't get a chance to look for pyshared or pymodules, which are both in /usr/lib on Ubuntu 11.10, but I did notice that there is a lib-dynload in /usr/lib/python2.7.

So what does this all mean? Well, I tried to install numpy with pip in a virtualenv on Fedora, and it still failed miserably (***), see Package Predicament, Part 2. I've researched it a little; there are some old bug reports and several SO questions, but no good answer. I did not try to install it in the base system, but I believe it would fail there, just like it did on my Oneiric Ocelot. I think it's a problem with pip, not virtualenv, and I'd guess with the Numpy setup.py or egg. BTW: it also fails on Windows, no surprise; where's my libgcc? Sorry, I don't have the complete MinGW installation, only msysGit. But of course I can install it just fine using the *.exe all-in-one installer from the Numpy/Scipy website.

[UPDATE: 2013-01-16] (***) Duh, I needed the dev packages, _obviously_. Do yum/apt-get/zypper install libatlas-dev, ... and make sure you have gfortran. Numpy, SciPy and Matplotlib all build fine with pip in a virtualenv, once you have the proper dev files. Now Windows is an entirely different story, but it can also be done.

[UPDATE: 2012-04-17] (**) You can seriously f*** up your sh** if you start messing around with your distro's version of distribute, distutils or setuptools. For example, Unity's Software Center depends on these to install packages. If you screw up your version of setuptools, Software Center won't even open. My advice is to (1) use virtualenv for any package that you need in a version that differs from your repo's. For example, python-requests is version 0.5 in Oneiric Ocelot, but the newest version of Requests is (as of now) 0.11.1, so you should create a virtualenv and install it there. (2) Only pip-install Python packages that are pure Python and not in your repo, and do not let them install dependencies that are already installed by your repo. For example, Requests now requires chardet >=1.0.0, but because Ubuntu's version of chardet is oddly numbered 2.0.1-2, pip replaces it with the exact same code. Probably fine, but not smooth. (3) If your desired package has dependencies that already exist in the repo, then use virtualenv and install them there. (4) Put a symlink to packages that require compiled code, such as Numpy, in your virtualenv's site-packages folder.
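Rule (1) sketched out with today's stdlib venv module (the post originally used the separate virtualenv package, but the idea is identical; the environment name is just an example):

```shell
# Create an isolated environment so off-repo packages never touch the
# system site-packages; installs then go through the venv's own pip.
python3 -m venv requests-env
requests-env/bin/python -c "import sys; print(sys.prefix)"
# e.g.: requests-env/bin/pip install requests   (stays inside the venv)
```

Everything installed via the venv's pip lands under requests-env/lib/, so no sudo is needed and the distro's packages are never disturbed.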

Tuesday, March 27, 2012

Package predicament or distribution dilemma, part 2

[UPDATE 2013-05-01]
Another post so hopelessly outdated that it's almost better off just deleted. On the flip side, it's instructive to relive my learning process, however painful. So I addressed the issue of building Numpy from source using pip in both the system (local) environment and in a virtualenv (always recommended) in an update to the next and final installment of this 3-part post: Package Predicament Part 3: Python, the final installment, and in there I also link to the solution: Building numpy, scipy, matplotlib and PIL in virtualenv on both Windows and Linux. In a nutshell my intuition in (a) below was correct. In particular, Numpy requires both C and Fortran compilers, and the source for its dependencies, or at least their headers, specifically BLAS and LAPACK. However, the issue of building Numpy was peripheral to the real question in this series of when and where to deviate from the Ubuntu package repository. My answers, now, would be as follows:
  • If the package is not in the repository, consider a PPA (personal package archive) such as this one for Sublime Text 2 by W8, or if possible drop a prebuilt binary into ~/opt and add a symlink to it from ~/bin as I described in this post: Install Add On Software and Create GTK+ Desktop File.
  • If the package is in the repository, but you want a different version, and it is a Python or Ruby package, consider using a virtualenv or rvm. This protects your system environment from differently named or different versions and possibly conflicting dependencies. For example, chardet is a Requests dependency, but in Ubuntu it's named differently than it is on PyPI, causing it to be installed twice and confusing everyone. Placing your off-repo version of Requests in a virtualenv protects your system and avoids the conflict. Ubuntu does have a place to put your system Python files, but I would almost never use it. For packages that are add-ons, I would do the same thing I did for Eclipse, which I described in this post: Eclipse in Ubuntu.
Stick to these rules and you will always be happy! I promise!
So here's the flip. Just for kicks I tried to update numpy, which was stupid because (a) it includes a lot of binaries that need to be compiled, not just .py code, and (b) it's so mature, what could possibly be in 1.6.1 that's not in 1.5.9?

So pip screamed loudly on and on about this not being where it looked and that not compiling, and finally the installation failed. Luckily for me it underwent all this trauma in a temp folder called ~/build, so my system was never altered.

Made me think, though... maybe I should try some of these "experiments" on a VM so my real system won't get trashed.

Sunday, March 25, 2012

distribution dilemma or package predicament

I recently installed Requests, a Python package for HTTP requests, from the Ubuntu Software Center, since that's become my habit, mostly because I foolishly believe that it will handle dependencies (if there actually are any) properly for me, and because I'm too scared or lazy to handle the sometimes recursive requirements an installation involves on my own. The downside of this is of course that packages in the Ubuntu repository (some are on Launchpad) are maintained by volunteers, are not always updated and, by the very nature of Ubuntu's release cycle, will always be at least 6 months old. In the case of Requests, the version on Ubuntu and Launchpad is 0.5.0, there's an archived page on Github, and compared to the current version 0.10.7, you can see it has come a very long way! Lamely, even the next release of Ubuntu, the Precise Pangolin, will have Requests 0.8.2, also archived on Github.

This is the distribution dilemma: go with the distro, old but reliable (is it really?), or go it alone. The Python hackers will say the obvious: use pip, also available on Ubuntu and not nearly as outdated, which is Python's answer to apt-get/yum, i.e. it will take care of installing dependencies for lazy, insecure me.

What do you think? When should you break from the fold? When do you stay in the warm embrace of your distro? Obviously the answer will be different for each user and circumstance. This time, I think I'm going to have to buck up and jump out of the nest.

(*) Yay! I found a happy medium. I must say pip kicks ass!
marko@myBox:~$ pip freeze
This shows me all of my installed Python packages, and there's requests in its sorry outdated state.
Now ...
marko@myBox:~$ pip install --upgrade requests
Oh, that didn't work; it couldn't uninstall the old requests because permission was denied.
So try ...
marko@myBox:~$ sudo pip install --upgrade requests
Ahh, success! I love linux! And I love the smartypants who wrote Pip! Thank you!

Update (2012-04-17): (*) Well I probably won't do that again. Please see Part 3. This approach works sometimes but can cause problems, and probably isn't optimal. My initial instincts, scared and lazy, were probably best. In the future, I'll only use pip in virtualenv, with the added plus, that I don't have to remember to precede it with sudo!

Saturday, March 24, 2012

Friday, March 23, 2012

Steamy

It may not seem that hot, but steam tables are essential for modeling Rankine cycles like steam turbines and the boilers that provide steam like solar thermal. But properties are not enough, you need thermodynamic derivatives as well. The IAPWS provides correlations for water and steam properties and derivatives which I have coded in MATLAB for your use. You can find them on Github.

Tuesday, March 20, 2012

Virtually vapid

[UPDATED 2016-02-23] Now I'm swinging back to VirtualBox on both systems as the overall king of VMs. Unfortunately VMware Player 7 on my Windows machine was freezing up randomly and at one point just refused to work. It turns out that there have been a lot of changes to VirtualBox in the last 4 years: not only does VirtualBox now include hardware virtualization, but it also has valid certs for Windows. Previously there were some scary messages along the lines of "use at your own risk," which obviously goes without saying, so... why were they saying it? Anyway, the newest VirtualBox 5 is super fast and so far stable. In addition to these obvious performance perks, the fact that VirtualBox is really free, as in FOSS, means that there are tons of great features like cloning and snapshots, and the license allows commercial use with no restrictions. The VMware license restricts VMware Player to non-commercial use only, and it was a free lite version of the more powerful VMware Workstation, so it was also light on features. So, for now, I'm really happy!

I did it. I installed a Windows VM on Ubuntu using VirtualBox and an Ubuntu VM on Windows using VMware 3.1.5 (the PC was too old for 4.0.2). VirtualBox was a breeze and fast, but VMware and Ubuntu took forever (*). You have to install VMware Tools (**), which takes like a million years. It's probably my ancient laptop and the out-of-date VM, but that was the best I could do for now. What's the point? There is no point, it's just cool.
P.S. 99% of problems are solved by power-cycling the VM: shut it down, don't sleep it.
P.P.S. "user$ sudo ./vmware-install.pl"  (**) since you need root access and your extracted vmware-tools folder is not on your path. "user$ sudo ./bin/vmware-uninstall.pl" to remove.
Update (2012-03-21): (**) Preferred method is to use open-vm-tools from package manager since it updates with your Linux distro. Use apt-get, Synaptic or Ubuntu Software Center. In other words do not use VMware Tools.
Update (2012-03-22): After installing your guest on VirtualBox VM, install guest-additions, to get video, mouse and other drivers. If you're on Ubuntu, use the iso image in the Software Center.
Update (2012-03-23): (*) OK, I've changed my opinion a little bit, after spending an embarrassing number of hours updating the guest XP VM to SP3 on the Linux host. What I'm getting at is that installing the guest Linux VM was a lot faster: even though the initial XP install was pretty fast, getting the updates is painfully slow. This is a good time to make a bullet list of lessons learned:

  1. VMware >3 only works on machines with Hardware Assisted Virtualization like Intel VT-x.
  2. VirtualBox works on anything.
  3. If installing a guest Linux VM using VMware, do not download VMware Tools from the internet. Instead use package manager (apt-get, Synaptic or Ubuntu Software Center) to install open-vm-tools package.
  4. If installing VirtualBox on Linux host, use package manager and install virtualbox-qt and virtualbox-guest-additions-iso (which is the VirtualBox equivalent of VMware Tools).
  5. For best performance, install on a system with at least 2GB of RAM and a >2-core processor. Although it will work on a single core with 1GB, you will noticeably see performance suffer, and you may experience system hangs or BSODs.
  6. If installing a guest Windows XP VM, turn off automatic updates and instead use Windows XP Service Pack 2 Network Installation Package for IT Professionals and Developers followed by Windows XP Service Pack 3 Network Installation Package for IT Professionals and Developers. Otherwise you will spend a very long time waiting for downloads and installing them.
  7. Do not overtax your host machine during critical guest installations, e.g. installing OS. Think of rooting your phone - although the stakes are not as high, you want everything to go alright so you don't have to repeat the entire process on a new VM. It's also a good idea to kill your screensaver/lock/sleep feature on your host during crucial installs, otherwise your hard-drive might turn off.
  8. After installing the guest OS, don't forget to run either open-vm-tools for VMware from the package manager or virtualbox-guest-additions-iso for VirtualBox, from the Devices menu option on your VirtualBox VM, after you've downloaded it from the package manager.

xz for windows

xz is a compression tool like gzip, bzip2, rar, 7z & zip, to name a few. Download the Windows binaries from the link above and add them to your Windows path (*). Done. If you have a newish version of tar, you can extract a tar.xz file with the command "tar -Jxvf filename.tar.xz". This works in DOS and bash. Note: xz comes included in default Cygwin or Ubuntu installations. I don't know about MinGW - msysGit did not have it.
(*) The easiest way to add a folder to your Windows path is to right-click your My Computer desktop icon, select Properties and the Advanced tab, then press the Environment Variables button. Select Path and Edit. Put the full path to your xz bin folder (c:\xz-5.0.3 for me, since I copied the extracted folder there) followed by a semicolon at the beginning of the current path. Click OK, OK, OK and you're done. You can see your path in DOS by typing "> echo %PATH%" and in bash with "$ echo $PATH".
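A quick round trip to check that your tar understands xz (file names here are just examples; -J selects the xz filter):

```shell
# Create a small archive, then extract it again with -J (xz).
echo "hello xz" > example.txt
tar -cJf example.tar.xz example.txt
rm example.txt
tar -Jxvf example.tar.xz
cat example.txt
```

If your tar is too old for -J, you can pipe instead: xz -dc example.tar.xz | tar -xvf -.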

Cygwin Ports

Updated (2012-03-19): Okay, I apologize for using Poquito Picante as a clipboard, but when I found Cygwin Ports I was super excited, especially when I saw Meld (see Meld Magic) and GtkSourceView, which is used by Meld. I had been trying to build Meld for Cygwin, and in general trying to figure out what the value of Cygwin is in the age of virtualization and Ubuntu/Fedora/openSUSE and other popular Linux distros.
But, although Cygwin Ports has amassed a large number of ports (awesome!), mostly GNOME/GTK+ & KDE, I personally couldn't deal with the terribly slow download speed from their ftp (maybe another mirror is faster?). More than once it timed out. Although they've added a blog and their code is in git on SourceForge, the mailing list for issues is not searchable, there is no issue tracker and IMHO support was generally lacking (sorry!).
So I went back to stock Cygwin, and lo and behold, magically I was able to build meld all on my own. See Building Meld for Cygwin.

Building Meld for Cygwin

This was harder than I thought, but by the end I felt pretty silly. Oh well here it goes:
  • Download and run the setup.exe from Cygwin. If you already have Cygwin you can skip this step.
  • In addition to the Cygwin defaults, select X11 and Gnome. This should install "make" and some other packages you'll need to build Meld that are not installed by default. I think Cygwin installs Python 2.6.7 by default (they are not up to 2.7.2 yet), but if you want to be extra careful, expand the Python packages and make sure it is installed (it should say 2.6.7 and/or keep).
  • Download the Meld source, copy to your ~/downloads folder and run ...
    • user$ tar -Jxvf filename
  • (see man tar). I personally like to keep all my Cygwin downloads in my Cygwin folders for easier navigation; otherwise you have to prefix /cygdrive to all of your Windows paths.
  • Navigate to the extracted folder and run ...
    • user$ make install
  • Hopefully you won't have any errors.
  • Start Cygwin/X - the easiest way is from the Windows start menu under All Programs, where you should find Cygwin/X and XServer. You'll know it's running when a window opens and you see the X in your system tray. To start the X server from the Cygwin bash, run ...
    • user$ startxwin
    • user$ startx
  • In the X window type meld, and if you're lucky it should run!

  • Another useful online resource is CyGNOME.
So now you have a successful Meld build in Cygwin. But you could have easily done this on Windows without Cygwin or on a Linux machine or Mac with zero effort.

For Meld on Windows see:

Tuesday, March 13, 2012

Bad Github api v2

Python basic authentication using urllib2.HTTPBasicAuthHandler() did not work for me on the Github api v2. On the users page I finally found an easy alternate authentication method:

http://github.com/api/v2/json/user/show?login=defunkt&token=XXX

Thank you very much Dustin for doing such a great job on py-github, in which deeply buried I found this gem.

I wonder if this would have been harder or easier in Java?

update (2012-03-13): Adding a header with the key "Authorization" and the value "Basic " plus "username:password" encoded with base64 also works, but why?

>>> import base64
>>> import urllib2
>>> url = "http://your.url.com"
>>> user = "your-username"
>>> token = "your-password"
>>> req = urllib2.Request(url)
>>> req.add_header('Authorization',
...     'Basic ' + base64.b64encode("%s:%s" % (user + '/token',
...     token)).strip())
>>> data = urllib2.urlopen(req).read()

This makes me think that HTTPBasicAuthHandler should work, at least according to this site on basic authentication and this Python page on fetching internet resources.
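To demystify the header value a bit: it is nothing more than "user/token:token" run through base64 with a "Basic " prefix, which you can reproduce at the command line (credentials below are placeholders):

```shell
# Build the Basic auth value by hand; defunkt/XXX are placeholder credentials.
printf 'defunkt/token:XXX' | base64
```

Prepend "Basic " to that output and you have exactly the header value the Python snippet above constructs; HTTPBasicAuthHandler would produce the same string, just not preemptively.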

I also found PyCurl (*) which probably would have helped a lot! I've never had any problems with libcurl. Another resource is ask/python-github2, which uses httplib2 (**). I have also read on Stack Overflow about Requests, which actually has a Github api example on its PyPI page. New is always better, so that's probably the way to go. In fact the Requests site goes so far as to say that the urllib2 API is "thoroughly broken", which in my limited experience is the truth.

(*) Update (2012-03-14): PycURL is not actively maintained; this package (7.19) was last updated in 2008. You must install libcurl to use PycURL. If you try to use the tarball from the PycURL website, it requires the install option --curl-dir=c:\your\src\dir, which is the folder of your libcurl files. The setup.py file looks for libcurl.lib, but in the current release of libcurl (7.24), this file is renamed, so the setup fails. Same for ssl setup. An alternative installation for Windows (32/64-bit Python 2.6/7) is available as an executable from Christoph Gohlke's Python extensions. On Ubuntu Linux, python-pycurl is maintained on launchpad.

OK, one more totally obvious way to send basic authentication is to include it in the URL and open it with urllib, not urllib2.

>>> import urllib
>>> urllib.urlopen("http://username:password@your.url.com")

Python, I'm not as psyched as I once was. Something this simple shouldn't be so totally non-obvious to so many people. I've starred no less than ten SO posts all related to the same thing, and not once has anyone gotten urllib2 to work the way it's supposedly meant.

Answered! I finally figured it out! Here is the Github v2 api response:

Server: nginx/1.0.13
Date: Wed, 14 Mar 2012 07:37:16 GMT
Content-Type: application/json; charset=utf-8
Connection: close
Status: 401 Unauthorized
X-RateLimit-Limit: 60
X-Frame-Options: deny
X-RateLimit-Remaining: 59
X-Runtime: 7
Content-Length: 26
Cache-Control: no-cache

Python urllib2 is expecting to see a "WWW-Authenticate" header in the response, but since it's not there, it never sends the login info. Problem solved. I saw this SO post, but it was also in the missing urllib2 manual all along.

(**) Update (2012-03-15): BTW: httplib2 has the exact same problem as urllib2, according to the SO post I mentioned above; the site must return "WWW-Authenticate" in the header or no credentials are sent.

Monday, March 5, 2012

Apache is not apache2

Looking for Apache on Ubuntu?  Search for apache2 (note the 2). I have made a habit of looking for packages at http://packages.ubuntu.com.

Also, downloading Synaptic was probably one of the smartest things I've done. But Google how to add the orphans filter, or see my post on KDiff3 Katastrophe.
