Unfortunately you can't depend on makemigrations to generate the correct SQL to migrate and cast data from a scalar field to a PostgreSQL ARRAY. But Django provides a nifty RunSQL operation, which is also described in the post "Down and Dirty - 9/25/2013" by Aeracode, the original creator of South, the predecessor of Django migrations. That post even mentions using RunSQL to alter a column with CAST.
The issue and trick to migrating a column to an ArrayField is given by PostgreSQL in the traceback, which says:
column "my_field " cannot be cast automatically to type double precision[]
HINT: Specify a USING expression to perform the conversion.
Further hints can be found by rtfm and searching the internet, such as this Stack Overflow Q&A.
My procedure was to use makemigrations to get the state_operations and then wrap each one into a RunSQL migration operation.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import django.contrib.postgres.fields


class Migration(migrations.Migration):

    dependencies = [
        ('my_app', '0XYZ_auto_YYYYMMDD_hhmm'),
    ]

    operations = [
        migrations.RunSQL(
            """
            ALTER TABLE my_app_mymodel
            ALTER COLUMN "my_field"
            TYPE double precision[]
            USING array["my_field"]::double precision[];
            """,
            state_operations=[
                migrations.AlterField(
                    model_name='mymodel',
                    name='my_field',
                    field=django.contrib.postgres.fields.ArrayField(
                        base_field=models.FloatField(), default=list,
                        verbose_name=b'my field', size=None
                    ),
                ),
            ],
        ),
    ]
Here is a technique I've used to input lists of primitive types into serializers with many=True:
from functools import partial

from rest_framework import status, viewsets
from rest_framework.response import Response

from my_app.serializers import MyNestedModelSerializer

...

class MyNestedModelViewSet(viewsets.ViewSet):
    serializer_class = MyNestedModelSerializer

    def create(self, request):
        serializer = self.serializer_class(data=request.data)
        # get the submodel list serializer since it can't render/parse html
        submodel_list_serializer = serializer.fields['submodels']
        # make a partial function by setting the submodel list serializer
        partial_get_value = partial(custom_get_value, submodel_list_serializer)
        # monkey patch submodel_list_serializer.get_value() with partial function
        submodel_list_serializer.get_value = partial_get_value
        if serializer.is_valid():
            simulate_data = serializer.save()
            # do stuff ...
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
The function `custom_get_value()` uses JSON to parse the input:
import json

from rest_framework.fields import empty
from rest_framework.utils import html


def custom_get_value(serializer, dictionary):
    if serializer.field_name not in dictionary:
        if getattr(serializer.root, 'partial', False):
            return empty
    # We override the default field access in order to support
    # lists in HTML forms.
    if html.is_html_input(dictionary):
        listval = dictionary.getlist(serializer.field_name)
        if len(listval) == 1 and isinstance(listval[0], basestring):
            # get only item in value list, strip leading/trailing whitespace
            listval = listval[0].strip()
            # add brackets if missing so that it's a JSON list
            if not (listval.startswith('[') and listval.endswith(']')):
                listval = '[' + listval + ']'
            # try to deserialize JSON string
            try:
                listval = json.loads(listval)
            except ValueError:
                # leave the original string in place
                pass
            # set the field with the new value list
            dictionary.setlist(serializer.field_name, listval)
        val = dictionary.getlist(serializer.field_name, [])
        if len(val) > 0:
            # Support QueryDict lists in HTML input.
            return val
        return html.parse_html_list(dictionary, prefix=serializer.field_name)
    return dictionary.get(serializer.field_name, empty)
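The heart of custom_get_value() is the bracket-wrapping trick: a bare comma-separated string is wrapped in brackets so json.loads() can parse it as a list. A minimal standalone sketch (parse_listish is my name for the distilled helper; it is not part of the original code):

```python
import json

def parse_listish(text):
    """Wrap a bare comma-separated string in brackets and parse it as JSON.

    Distilled from custom_get_value(); returns the bracketed string
    unchanged if it still isn't valid JSON.
    """
    text = text.strip()
    # add brackets if missing so that it's a JSON list
    if not (text.startswith('[') and text.endswith(']')):
        text = '[' + text + ']'
    try:
        return json.loads(text)
    except ValueError:
        return text

print(parse_listish('1.5, 2.5, 3.5'))  # -> [1.5, 2.5, 3.5]
print(parse_listish('[1, 2, 3]'))      # -> [1, 2, 3]
```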
Love-struck supervillain (Neil Patrick Harris) loses to superhero (Nathan Fillion), the musical. This just never gets old. Almost as good as the official Star Wars trailer.
FYI: You do not need to use `obj.empty` to preallocate an object array.
In fact, as soon as you assign a value to any element in the object array, it grows the array to that size, which allocates (or reallocates) RAM for the new object array, defeating the point of preallocating space.
“If you make an assignment to a property value, MATLAB calls the SimpleClass constructor to grow the array to the required size:”
Instead, if you want to preallocate space for an object array, grow the array once by assigning the last object first. This requires the class to have a no-arg constructor. Each time you grow the array you reallocate RAM for it, wasting time and space, so do it once with the maximum expected size of the array. See Initialize Object Arrays and Initializing Arrays of Handle Objects in the OOP documentation.
>> S(max_size) = MyClass(args)
Another option is to preallocate any other container, like a cell array (best IMHO), structure or containers.Map, and then fill in the class objects as they are created. An advantage of this is that you don't have to subclass matlab.mixin.Heterogeneous to group different classes together.
>> S = cell(1, max_size); args = {1,2,3; 4,5,6; 7,8,9};
>> for x = 1:size(args,1), S{x} = MyClass(args{x,:}); end
The only time to use an empty object is if you want it as a default for the situation where nothing gets instantiated, and you need it to be an instance of the class. Of course any empty array will do this, i.e.: '', [] and {} are also empty.
>> S = MyClass.empty
>> if blah, S = MyClass(args); end
>> if isa(S, 'MyClass') && isempty(S), do stuff; end
I hope this helps someone; it definitely helped me understand the odd nature of MATLAB. This behavior exists because everything in MATLAB is an array; even a scalar is a <1x1 double>. Read about the C-API mxArray for external interfaces and mwArray for compiled/deployed MATLAB for more info.
MATLAB = Matrix Laboratory
Class definitions didn't appear until 2008. Other languages like C++, Java, Python and Ruby are object-first. So the empty method is meant to duplicate the ability to be empty, similar to other MATLAB datatypes such as double, cell, struct, etc. IMO, outside of MATLAB it's a very artificial and somewhat meaningless construct.
Basically this comes with everything you need to work on Windows, Mac or Linux. It uses rsync for transport, and the rest is mostly written in Python, but it does depend on some libraries that are standard on Linux and have mature ports for Windows and Mac. One of the major benefits of git-fat over git-media is that it uses a .gitfat config file, which updates your .git/config when git fat init is run. This is similar to git submodules and makes repos portable. In general there are more features than git-media. For example, you can list the files managed by git-fat, check for orphans and pull data from or push data to your remote storage. The only catch is that the wheel file at PyPI has metadata tags for win32, not amd64. This is easy to fix, but I think there are a couple of use cases that might differ from how the distribution was implemented.
Bootstrap
If you look at a Linux install, the repository has a symlink in bin called git-fat that points to git_fat.py. Why not just bootstrap git-fat, since we only really need one file? Just dump everything in a single folder, rename the file to git-fat, make sure there's a shebang `#! /usr/bin/env python` (which git seems to prefer), then stick it on your path. This works for both msys-git and Windows cmd.
MSYS-Git
msysgit comes with Git Bash, a POSIX shell that includes many Linux tools ported to Windows, such as gawk and ssh. Unfortunately it does not come with rsync; however, you can get an msys build of rsync from mingw-w64 (that's where I got it), from msys2, from the original mingw project, from mingw builds and from lots of other places. You could even get it from cygwin. I usually stick files like this in my local bin folder, which is always first on my path in Git Bash. You'll also need to grab the iconv, intl, lzma and popt msys libraries, which rsync depends on. Since you now have these libraries, you don't need the ones bundled in the wheel; however, git-fat is written to look for those bundled files if it detects that your platform is Windows, so just comment out those lines. You will also need to change awk to gawk, since awk is a shell script that calls gawk. Again, you can bootstrap this file, i.e.: put it in your local bin folder, or install it into your Python site-packages and/or Scripts folder.
__main__
This is the way I ended up using it. You can download my version here and install it with pip. I put the Windows libraries into the site-packages git-fat folder instead of Scripts, and then in the git-fat script added the site-packages git-fat folder to the shell's path. Then I called the git-fat package as a script by adding a __main__.py file that imports git_fat.py, and ran it using Python's -m option, but you could just as easily call the module as a script. This keeps the extra libraries bundled together rather than dumping them into the Scripts folder with everything else. Also, since I mostly use Git Bash, it doesn't put git-fat's libraries ahead of git's, since they both use gawk and ssh.
Usage
Usage is extremely easy compared to git-media, which is a plus! Note these instructions are for msysgit Git Bash. For a Windows cmd window, replace git fat with git-fat everywhere. Both methods should work fine.
Clone a repo that uses git-fat: git clone my://remote/repo.git
At this point there are only placeholders for your files, with the same names but containing just SHA hashes that tell git-fat which file to grab from your remote storage.
Run git fat init, which sets up the filters and smudges that tell your local repo how to use git-fat via the .gitattributes file, which is already part of the repo.
Run git fat pull, which downloads your files from the remote storage specified in the .gitfat file, which is also already in the repo.
Run git fat list to see a list of managed files.
Run git fat status to see a list of orphans waiting to be pulled/pushed.
Create a .gitfat file that specifies where rsync should store files. Note there are no indents. A Windows UNC path seems to work fine.
[rsync]
remote = //server/share/repo/fat
Create a .gitattributes file to specify which files to store at the remote.
Commit the .gitfat and .gitattributes files.
Run git fat init to set up your local .git/config
Hack, commit, push, etc.
Run git fat push to send stuff to your remote
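For reference, the .gitattributes entries from the steps above might look something like this; the patterns are just examples, and filter=fat is the attribute git-fat's README uses to hook its clean/smudge filters:

```
*.zip filter=fat -crlf
*.png filter=fat -crlf
*.tar.gz filter=fat -crlf
```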
Git-Media - sucky
Finally time to install Ruby. You're going to need it if you want to use git-media, which lets you mix big binary files within your git repo but store them on some remote host, which could be Google Drive, Amazon S3, another server via ssh/scp or a network share. Why don't you want to store big binary files in your git repo? Because every clone contains every revision of every file, the repo will quickly blow up as you make new commits.
RubyInstaller
Super easy: they recommend 2.1, no admin rights required, and it unzips into C:\ just like Python. I checked all of the options: Tk/Tcl, add Ruby to path, and whatever the last option was. Then I ran gem update.
git-media gem
gem install git-media trollop right_aws
right_aws OpenSSL::Digest issue
There is a tiny issue with right_aws where it outputs a deprecation message from OpenSSL::Digest.
The readme on the github overview page has everything you need to know.
other large file storage
git-lfs: a relative newcomer, developed by GitHub for large file storage on GitHub. Git-LFS appears to require an LFS server, which GitHub intends to monetize. It sounds promising and is open source, but unfortunately I'm not sure if it can easily be implemented as a standalone system.
There are also packages that will create a boilerplate project layout for you, but I wouldn't recommend them except as reference guides - the tutorial by NSLS-2 being the notable exception, PTAL!
Bootstrap a Scientific Python Library: This is a tutorial with a template for packaging, testing, documenting, and publishing scientific Python code.
Cookiecutter: A command-line utility that creates projects from cookiecutters (project templates), e.g. creating a Python package project from a Python package project template.
It's hard to pin a standard style down. Here’s mine:
MyProject/ <- git repository
|
+- .gitignore <- *.pyc, IDE files, venv/, build/, dist/, doc/_build, etc.
|
+- requirements.txt <- to install into a virtualenv
|
+- setup.py <- use setuptools, include packages, extensions, scripts and data
|
+- MANIFEST.in <- files to include in or exclude from sdist
|
+- readme.rst <- incorporate into setup.py and docs
|
+- changes.rst <- release notes, incorporate into setup.py and docs
|
+- myproject_script.py <- script to run myproject from command line, use Python
| argparse for command line arguments put shebang
| `#! /usr/bin/env python` on 1st line and end with a
| `if __name__ == "__main__":` section, include in
| setup.py scripts section for install
|
+- any_other_scripts.py <- scripts for configuration, documentation generation
| or downloading assets, etc., include in setup.py
|
+- venv/ <- virtual environment to run tests, validate setup.py, development
|
+- myproject/ <- top level package keeps sub-packages and package-data together
| for install
|
+- __init__.py <- contains __version__, an API by importing key modules,
| classes, functions and constants, __all__ for easy import
|
+- docs/ <- use Sphinx to auto-generate documentation
|
+- tests/ <- use nose to perform unit tests
|
+- other_package_data/ <- images, data files, include in setup.py
|
+- core/ <- main source code for myproject, sometimes called `lib`
| |
| +- __init__.py <- necessary to make core a sub-package
| |
| +- … <- the rest of the folders and files in myproject
|
+- related_project/ <- a GUI library that uses myproject, or tools that
|                      myproject depends on, bundled together, etc.
|
+- __init__.py <- necessary to make related_project a sub-package
|
+- … <- the rest of the folders and files in the related project
Get Python and install it on your system. You may need a working binary to bootstrap the amd64 build.
Get a working version of Microsoft SDK for Windows 7 (7.0). AFAIK Visual Studio 2013 Express Desktop or Community editions include both SDK 7.0 and 7.1, so alternatively install that. Make sure that you include the redistributables when installing the SDK, because you will need them to distribute your Python build. See upgrade to vs2013 for fixes to some issues you may encounter, especially if you have some other VC components already installed.
Change to the directory where the source tarball is extracted.
Patch the Tools/buildbot/externals batch script exactly as described in the PCbuild readme. I added the release build immediately after the debug ones.
if not exist tcltk\bin\tcl85.dll (
@rem all and install need to be separate invocations, otherwise nmakehlp is not found on install
cd tcl-8.5.15.0\win
nmake -f makefile.vc INSTALLDIR=..\..\tcltk clean all
nmake -f makefile.vc INSTALLDIR=..\..\tcltk install
cd ..\..
)
if not exist tcltk\bin\tk85.dll (
cd tk-8.5.15.0\win
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 clean
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 all
nmake -f makefile.vc INSTALLDIR=..\..\tcltk TCLDIR=..\..\tcl-8.5.15.0 install
cd ..\..
)
if not exist tcltk\lib\tix8.4.3\tix84.dll (
cd tix-8.4.3.5\win
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk clean
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk all
nmake -f python.mak DEBUG=0 MACHINE=IX86 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk install
cd ..\..
)
From the archive root (Python-2.7.9) call the externals batch script. It will copy and build all of the externals from svn.python.org in a folder called externals/.
Now cd to PCbuild and call build.bat. Voila, python.exe for x86.
Almost there. Go back up to the root of the extracted tarball and rename externals to externals-x86.
Change the target to Release x64 by typing the following:
Set an environment variable HOST_PYTHON=C:\Python27\python.exe. You may not need this at all or you might be able to use the 32-bit version just built.
Patch the buildbot externals-amd64 batch script just like the x86 script.
if not exist tcltk64\bin\tcl85.dll (
cd tcl-8.5.15.0\win
nmake -f makefile.vc MACHINE=AMD64 INSTALLDIR=..\..\tcltk64 clean all
nmake -f makefile.vc MACHINE=AMD64 INSTALLDIR=..\..\tcltk64 install
cd ..\..
)
if not exist tcltk64\bin\tk85.dll (
cd tk-8.5.15.0\win
nmake -f makefile.vc MACHINE=AMD64 INSTALLDIR=..\..\tcltk64 TCLDIR=..\..\tcl-8.5.15.0 clean
nmake -f makefile.vc MACHINE=AMD64 INSTALLDIR=..\..\tcltk64 TCLDIR=..\..\tcl-8.5.15.0 all
nmake -f makefile.vc MACHINE=AMD64 INSTALLDIR=..\..\tcltk64 TCLDIR=..\..\tcl-8.5.15.0 install
cd ..\..
)
if not exist tcltk64\lib\tix8.4.3\tix84.dll (
cd tix-8.4.3.5\win
nmake -f python.mak DEBUG=0 MACHINE=AMD64 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk64 clean
nmake -f python.mak DEBUG=0 MACHINE=AMD64 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk64 all
nmake -f python.mak DEBUG=0 MACHINE=AMD64 TCL_DIR=..\..\tcl-8.5.15.0 TK_DIR=..\..\tk-8.5.15.0 INSTALL_DIR=..\..\tcltk64 install
cd ..\..
)
From the archive root (Python-2.7.9) call the externals-amd64 batch script.
Finally cd back to PCbuild and call build.bat -p x64. Voila, python.exe for x64.
Add externals/tcltk to your path and run the tests.
Copy the VC runtime from Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\x86\Microsoft.VC90.CRT\ to the PCbuild folder, and from
Program Files (x86)\Microsoft Visual Studio 9.0\VC\redist\amd64\Microsoft.VC90.CRT\ to the PCbuild\amd64 folder.
To distribute, create a similar file structure for both architectures and copy the files into the folders:
Python27
|
+-python.exe
|
+-pythonw.exe
|
+-python27.dll
|
+-msvcr90.dll <- from VC/redist/MICROSOFT.VC90.CRT
|
+-msvcp90.dll <- from VC/redist/MICROSOFT.VC90.CRT
|
+-msvcm90.dll <- from VC/redist/MICROSOFT.VC90.CRT
|
+-MICROSOFT.VC90.CRT.manifest <- from VC/redist/MICROSOFT.VC90.CRT
|
+-DLLs <- all externals/tcltk/bin, PCbuild/*.dll & PCbuild/*.pyd files
| & PC/py.ico, PC/pyc.ico & PC/pycon.ico.
|
+-Lib <- same as source archive except all *.pyc files, all plat-* folders
| & ensurepip folder
|
+-libs <- all PCbuild/*.lib files
|
+-include <- same as source archive + PC/pyconfig.h
|
+-tcl <- everything in tcltk/lib
|
+-Scripts <- PCbuild/idle.bat
|
+-Doc <- set %PYTHON%=python.exe and build the docs with Sphinx if you have it
Note: wish85.exe and tclsh85.exe won't work with this installed Python file structure (although they will work in the externals bin folder) because they look for tcl85.dll in ../lib. Also note that idle.bat needs some fixing. It's also very important to know that most executables in Scripts created by installer packages have the path to python.exe hardwired, i.e.: they are initially not portable; check out this blog for a few quick tricks to fix them.
This issue starts simply enough, but then unravels to reveal some very interesting insights. Evidently creating a Python for Windows installer that does not depend on administrator rights is not as easy as it seems. The question comes down to what we really need. Steve Dower from Microsoft breaks it down like this:
Python as application
This is when someone wants to use Python to write scripts, do analysis, etc. Python is an application on their Windows machine just like MATLAB or Excel. This could be installed per-user, possibly without administrative rights.
Python as library
This version of Python could be used to embed Python in an application, similar to what py2exe and other Python-freezing packages do. This could be a zip file.
Python as-if system
This version would be installed on a system, possibly by system admins, in a custom Windows build, similar to the way that Python is integrated into Mac and Linux. It would require admin rights and could be used by system admins to add custom functionality to the corporate OS.
Let me say 1st off that I think that #3 is absurd. I can't imagine Windows system admins ever using Python in this way. Most of them have never even heard of Python. And there is an entire .NET infrastructure to do exactly this. Why would you use Python instead of C#? Windows will never, ever be like Linux. It is not a POSIX environment, it does not use GNU tools and it does not need Python.
I think that #1 and #2 could serve the same purpose and should really be the only options available. Users who want to use Python on Windows should unzip the Python Windows distro to the root of their system drive (e.g. C:\Python27) and use it. No admin rights required. End of story. It should contain the msvcr90 redistributable and all libraries it needs. There should be no other dependencies.
There is also a 0th option - do not distribute a binary for Windows at all. Let Windows users build from source themselves, or recommend an alternate distribution, which Python.org already does on its alternative distributions page. This is what Ruby does, and perhaps it's the best way to satisfy everyone. But the fact that official Python is available for Windows is a very nice thing. Although Enthought and ActiveState have been around for a long time, they are private companies and could go out of business. Nevertheless, this does seem to be the path people are taking.
Anaconda, from Continuum Analytics (a relative newcomer founded by Peter Wang and Travis Oliphant, formerly of Enthought), seems to have become, almost overnight, the most popular source of Python on Windows. Its baby sister, Miniconda, is less known but merely installs the conda package/environment manager and Python 2.7, which can be used to install more Python versions and packages, whereas Anaconda preinstalls most sci-data packages. The only major concern for me with Anaconda is that it is closed source. Is it the python.org version built out of the box? WinPython, on the other hand, is open source on GitHub and offers both 32- and 64-bit versions that do not require admin rights to install. Enthought is also closed source, and PortablePython only offers an older, out-of-date 32-bit version. There is also PythonXY, but for me it seemed buggy.
Not sure what the future of Python on Windows will look like. If you are interested in shaping that future, I suggest you contact one of the Python devs and let them know what your use case is.
Welcome to the future. It's nice of you to join us. Have you been limping along with very old outdated C/C++/C# toolsets? Still using Visual Studio 2010? 2008? Afraid to uninstall them for fear you will lose the ability to compile your projects. Let's take care of that right now, don't worry about a thing. In about 1-2 hours, you will be happily in the future, enjoying the modern conveniences of Visual Studio 2013. It's very nice here. Won't you join us?
Remove Visual Studio 2010 SP1 and run the uninstall utility
Name                                                          Size     Version
Microsoft Visual Studio 2010 Service Pack 1                   75.9 MB  10.0.40219
Microsoft Visual C++ 2010 Express - ENU                                10.0.40219
Microsoft Visual C# 2010 Express - ENU                                 10.0.40219
Microsoft Visual Studio 2010 Express Prerequisites x64 - ENU  21.6 MB  10.0.40219
You can follow the instructions I posted in the 3/13/2015 update to Download sites for old, free MS compilers and IDEs, but the system restore point has dubious value; in fact, it didn't help at all. When I was in trouble, I found myself reinstalling the application and then removing it again in the correct order. The key here is to uninstall SP1 first, then use Stewart Heath's VS2010 uninstall utility in default mode. If you find yourself in trouble, reinstall VS2010 and VS2010-SP1. If you need installers, I have them here in my dropbox.
Remove Visual Studio 2008 SP1.
Name                                                      Size  Version
Microsoft Visual C++ 2008 Express Edition with SP1 - ENU
Same here: make sure you have an installer. The web installer in my dropbox surprisingly still works. Any trouble, reinstall and uninstall again.
Remove both SDKs for Windows 7 and .NET Frameworks 3.5 and 4.0
Name                               Size  Version
Microsoft SDK for Windows 7 (7.1)        7.1.7600.0.30514
Microsoft SDK for Windows 7 (7.0)        7.0.7600.16385.40715
If you have any issues with this, look at the last entry in the log; there's a View Log button next to the Finish button when setup fails. A part of the SDK that the installer is looking for may be missing. Search for the keywords "fail" and "unknown source". If you find an unknown source in the log file, download and extract the ISO from the SDK archives page and run the installer for the missing component. Any archive client will work; I use 7-zip 9.22beta. For SDK 7.0 I had to reinstall Intellidocs before I could completely remove the SDK, and for SDK 7.1 I had to install the Windows Performance Toolkit to remove the SDK completely. Only the ISO will work here, not the redistributables.
Also beware of the Microsoft Install/Uninstall Fixit: it doesn't actually do anything but clean your registry. It removed both SDKs but then wouldn't let me reinstall them; all of the files were still in C:\Program Files\Microsoft SDKs\Windows\v7.x. All that was different was that they were no longer in the add/remove programs control panel.
Remove everything else
Name                                                                                     Size     Version
Microsoft Document Explorer 2008
Microsoft Help Viewer 1.1                                                                3.97 MB  1.1.40219
Microsoft SQL Server 2008 R2 Management Objects                                          12.4 MB  10.50.1750.9
Microsoft SQL Server Compact 3.5 SP2 ENU                                                 3.39 MB  3.5.8080.0
Microsoft SQL Server Compact 3.5 SP2 x64 ENU                                             4.50 MB  3.5.8080.0
Microsoft SQL Server System CLR Types                                                    930 KB   10.50.1750.9
Application Verifier (x64)                                                               55.3 MB  4.1.1078
Debugging Tools for Windows (x64)                                                        39.8 MB  6.12.2.633
Microsoft Visual Studio 2008 Remote Debugger light (x64) - ENU
Microsoft Visual Studio 2010 ADO.NET Entity Framework Tools                              34.2 MB  10.0.40219
Microsoft Visual Studio 2010 Tools for Office Runtime (x64)                                       10.0.50903
Microsoft Windows Performance Toolkit                                                    26.1 MB  4.8.0
Microsoft Windows SDK for Visual Studio 2008 Headers and Libraries                       114 MB   6.1.5288.17011
Microsoft Windows SDK for Visual Studio 2008 SP1 Express Tools for .NET Framework - enu  4.41 MB  3.5.30729
Microsoft Windows SDK for Visual Studio 2008 SP1 Express Tools for Win32                 2.61 MB  6.1.5295.17011
As you can see there is a lot of detritus left behind.
Remove the 2008 & 2010 C++ compilers and the Visual C++ 2010 SP1 redistributables.
Name                                                              Size     Version
Microsoft Visual C++ Compilers 2008 Standard Edition - enu - x64  127 MB   9.0.30729
Microsoft Visual C++ Compilers 2008 Standard Edition - enu - x86  321 MB   9.0.30729
Microsoft Visual C++ Compilers 2010 SP1 Standard - x64            206 MB   10.0.40219
Microsoft Visual C++ Compilers 2010 SP1 Standard - x86            613 MB   10.0.40219
Microsoft Visual C++ 2010 x64 Redistributable - 10.0.40219        6.86 MB  10.0.40219
Microsoft Visual C++ 2010 x86 Redistributable - 10.0.40219        5.44 MB  10.0.40219
These will be reinstalled later. You cannot install the Windows SDK for Windows 7 (7.1) with .NET 4.0 Framework if you already have the Visual C++ 2010 SP1 redistributable installed, or you will get the dreaded error 5100.
The standalone compiler for Python will install the VS2008 (VC90) compilers and headers, as well as the vcvarsall.bat batch file that sets the environment variables necessary to build Python packages on the fly using pip and setuptools>=6.0. However, to build packages using distutils, i.e.: python setup.py build, you will need to patch vcvarsall.bat in your C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC directory. To do this, copy the vcvarsall.txt file that the SDK created as vcvarsall.bat, then patch it with the Gist in my post, i.e.: patch vcvarsall.bat vcvarsall.bat.patch in bash after downloading and extracting the Gist.
The Express edition only has VB, C# and C/C++ compilers and does not allow extensions, whereas the Community edition has everything but is restricted for commercial use in large corporations.
% get or create a new database
db = com.almworks.sqlite4java.SQLiteConnection(java.io.File('sample.db'))
db.open % open database
% create a table called “person” with 2 columns, name and id
db.exec('create table person (id integer, name string)')
% add rows to “person” table
db.exec('insert into person values(1, "leo")')
db.exec('insert into person values(2, "yui")')
db.dispose % dispose of db handle
% optionally close and reopen database to see it persists
db = com.almworks.sqlite4java.SQLiteConnection(java.io.File('sample.db'))
db.open
% create a prepared statement with ? wildcard
st = db.prepare('select * from person where id>?')
st.bind(1,0) % bind 1st ? wildcard to any number greater than 0
% binding the prepared statement also works for strings
% st = db.prepare('select * from person where name>=?')
% st.bind(1,'') % bind 1st ? wildcard
% note: all strings are greater than or equal to ''
% step through matching rows
while st.step
    % return the value from the desired column as its type
    st.columnInt(0) % get id from column 0
    st.columnString(1) % get name from column 1
end
% dispose of the used up statement container
st.dispose
st.isDisposed
% ditto for db connection
db.dispose
db.isDisposed
% output
ans = 1
ans = leo
ans = 2
ans = yui
Although IMO xerial's JDBC driver (with SQLite included) is much easier:
% https://bitbucket.org/xerial/sqlite-jdbc/wiki/Usage
javaaddpath('C:\Users\mmikofski\Documents\MATLAB\sqlite\sqlite-jdbc-3.8.7.jar')
d = org.sqlite.JDBC
p = java.util.Properties()
c = d.createConnection('jdbc:sqlite:sample.db',p) % named file
% optional connections
% c = d.createConnection('jdbc:sqlite:C:/full/path/to/sample.db',p) % full path
% c = d.createConnection('jdbc:sqlite::memory:',p) % memory db
% c = d.createConnection('jdbc:sqlite:',p) % default
s = c.createStatement() % create a statement
% create a table, insert rows, etc.
% s.executeUpdate('create table person (id integer, name string)');
% s.executeUpdate('insert into person values(1, "leo")');
% s.executeUpdate('insert into person values(2, "yui")');
% execute query, get id and name
rs = s.executeQuery('select * from person')
while rs.next
    rs.getString('id')
    rs.getString('name')
end
c.close % close connection
c.isClosed
% output
ans = 1
ans = leo
ans = 2
ans = yui
These are both extremely light and fast, but offer somewhat more quality than Google Prettify, which I also mentioned in Syntax Sensation. One thing I will say about both of these is that, looking at the resulting DOM, highlight.js prepends hljs- to all of its classes, almost like a namespace, so it's unlikely that there will ever be conflicts with any other plugin. highlight.js has many languages and styles, while Prism has many extra plugins, like line-numbers, that you can add to your build from their download page. Finally, both of these new syntax highlighters conform to the <pre><code class="language-blah"></code></pre> style that evidently is the standard for putting code into HTML documents. Who knew? SyntaxHighlighter only uses <pre class="brush: blah"></pre>, which is non-standard, I guess. Nit-pick much?
Check out for yourself how Bootstrap interacts with each syntax highlighter in the iframe below, or click the link to open it in a new tab. The option menu on the right side of the navbar lets you choose which syntax highlighter to see. The template is Bootstrap's theme example, which you can return to by clicking the brand on the left side of the navbar. Since highlight.js comes with 49 styles, you can peruse them from the dropdown menu. Let me know if you find anything amiss anywhere. Of course you will see the extra y-scrollbar in the SyntaxHighlighter rendition.
UPDATE: 2015-02-25 Today I nearly crapped a cow. I was testing out a custom ErrorDocument 401 directive that would redirect back to the sign-in page (BTW: that's a bad idea, since the IE & Firefox sign-on windows are modal). I clicked OK with empty username and empty password fields, and I got the dreaded Internal Server Error HTTP/1.1 500 page. Then, because the browser had cached the empty creds, I could not get back on the server. Clearing the cache and browser history had no effect. I actually thought I had broken Apache! Stack Exchange ServerFault to the rescue. The fix is to set AuthLDAPBindAuthoritative off in httpd.conf.
This is surprisingly easy, although there is some new syntax to learn, and you will need to get some info from your system administrator. Here are some steps for Apache-2.4 from ApacheLounge.
Note: This will change how Django works; for example, any authorized user not in the Django Users model will have their username automatically added and set to active, but their password and is_staff attribute will not be set.
Get the URL or IP address of your Active Directory server from your system administrator. For LDAP with basic authentication, the port is usually 389, but check to make sure.
Also get the "Distinguished Name" of the "search base" from your system administrator. A "Distinguished Name" is LDAP lingo for a string made up of several components, usually the "Organizational Unit (OU)" and the "Domain Components (DC)", that distinguish entries in the Active Directory.
Finally ask your system administrator to set up a "binding" distinguished name and password to authorize searches of the Active Directory.
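Just to make the distinguished-name structure concrete, the relative components of a DN can be pulled apart with a few lines of Python. This is purely illustrative (not part of the Apache setup), the DN below is made up, and the parser is naive: it assumes no escaped commas or equals signs in the values.

```python
def parse_dn(dn):
    """Split a distinguished name into (attribute, value) pairs.

    Naive sketch: assumes no escaped commas or equals signs in values.
    """
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.partition("=")
        pairs.append((attr.strip(), value.strip()))
    return pairs

# A made-up binding DN, like the one a sysadmin might provide:
dn = "CN=binding_account,OU=Administrators,DC=activedirectory,DC=com"
print(parse_dn(dn))
```

Reading the output, the common name (CN) identifies the entry, and the OU and DC components locate it within the directory tree.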
Also, in httpd.conf, add a Location for the URL endpoint to be password protected, e.g. / for the entire website.
You must set AuthName. This will be displayed to the user when they are prompted to enter their credentials.
You must also set AuthType, AuthBasicProvider, AuthLDAPUrl and Require. Prepend ldap:// to your AD server name and append the port, base DN, attribute, scope and search filter. The port is separated by a colon (:), the base DN by a slash (/) and the other parameters by question marks (?), such as:
ldap://host:port/basedn?attribute?scope?filter
<Location />
    AuthName "Please enter your SSO credentials."
    AuthBasicProvider ldap
    AuthType basic
    AuthLDAPUrl "ldap://my.activedirectory.com:389/OU=Offices,DC=activedirectory,DC=com?sAMAccountName"
    AuthLDAPBindDN "CN=binding_account,OU=Administrators,DC=activedirectory,DC=com"
    AuthLDAPBindPassword binding_password
    AuthLDAPBindAuthoritative off
    LDAPReferrals off
    Require valid-user
</Location>
The default "scope" is sub which means it will search the base DN and everything below it in the Active Directory. And the default "filter" is (objectClass=*) which is the equivalent of no filter.
Note: Without LDAPReferrals off, you may see this error in the Apache error log on Windows, where the referral rebind is not implemented:
(70023)This function has not been implemented on this platform: AH01277: LDAP: Unable to add rebind cross reference entry. Out of memory?
Finally, restart your Apache httpd server and test out your site.
Now when users go to your Django site and open the location that requires authentication, they will see a pop-up that asks for their credentials.
Logout
In addition to the authenticated users being added to the Django Users model, the user's credentials are stored by the browser. This makes logging out awkward, since the user would otherwise need to close their browser to log out. There are several approaches to get Django to log out a user.
Redirect the user to a URL with fake basic-auth credentials prepended to the host:
http://log:out@example.com
Render a template with the status set to 401, the code for "unauthorized", which clears the credentials from the browser cache.
from django.shortcuts import render
from django.contrib.auth import logout as auth_logout
import logging  # import the logging library

logger = logging.getLogger(__name__)  # get an instance of a logger


def logout(request):
    """
    Replaces ``django.contrib.auth.views.logout``.
    """
    logger.debug('user %s logging out', request.user.username)
    auth_logout(request)
    return render(request, 'index.html', status=401)
Using Telnet to ping AD server
A lot of sites suggest this. First you will need to enable the Telnet client on your Windows PC. This can be done from Uninstall a program in the Control Panel by selecting Turn Windows features on or off and checking Telnet Client. Then open a command terminal, type telnet, and at the prompt enter open my.activedirectory.com 389. Surprise! If it works you will only see the output:
Connecting to my.activedirectory.com...
If it does not work then you will see this additional output:
Could not open connection to the host, on port 389: Connect failed
Now treat yourself and try open towel.blinkenlights.nl. Use Ctrl + ] to kill the connection, then type quit to exit telnet.
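If you'd rather not enable telnet, the same reachability check can be done with Python's standard socket module. This is a minimal sketch; the host name in the comment is a placeholder for your own AD server.

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

# e.g. can_connect("my.activedirectory.com", 389)
```

Unlike telnet, this also gives you a clean True/False you can script against.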
Testing LDAP using Python
Python-LDAP
So, to learn more about LDAP, there are a couple of packages you can use to interrogate and authenticate with an AD server using LDAP. Python-LDAP seems to be common and easy to use; it's based on OpenLDAP. Here's a list of common LDAP queries from Google.
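One thing worth knowing before running searches: user-supplied values in an LDAP search filter should be escaped per RFC 4515 (Python-LDAP ships ldap.filter.escape_filter_chars for this). Here is a pure-Python sketch of the same idea; the username is a made-up example.

```python
def escape_filter_value(value):
    """Escape LDAP search-filter metacharacters per RFC 4515.

    Sketch equivalent of python-ldap's ldap.filter.escape_filter_chars.
    """
    # Backslash must be escaped first, so we don't double-escape
    # the backslashes introduced by the other replacements.
    for char, code in (("\\", r"\5c"), ("*", r"\2a"),
                       ("(", r"\28"), (")", r"\29"), ("\x00", r"\00")):
        value = value.replace(char, code)
    return value

# Build a safe sAMAccountName filter for a (hypothetical) user name:
username = "jdoe(*)"
ldap_filter = "(sAMAccountName=%s)" % escape_filter_value(username)
print(ldap_filter)
```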
If users will only use the Django application on a Windows PC on which they have already been authorized, e.g. through Windows logon, then using either mod_authnz_sspi or mod_authnz_ntlm to acquire those credentials from the Windows session is also an option.
There are several Django extensions and snippets that use Python-LDAP and override ModelBackend so that Django handles authorization and authentication instead of Apache.
Some Django extensions and snippets also subclass ModelBackend to use PyWin32, so that authorization and authentication within Django use the local credentials of the current Windows machine.
Sure, you could do this. You can also use SSL with LDAP, or Kerberos with SSPI/NTLM. But, alas, I did not research these options, although I did come across a few references.
CSS and JS
The references section is loosely based on the JavaScript TOC robot. It could also use CSS counters and the ::before pseudo-element, but since I'm already using JavaScript that doesn't make sense. Here's what that looked like anyway.
In case it wasn't clear above, the JavaScript below is not what I'm using on this page. It was for a different approach using counters, which I scrapped, so these examples are very contrived and don't really make sense anymore.
I am proud to introduce Quantities for MATLAB. Quantities is a units and uncertainties package for MATLAB, inspired by Pint, a Python package for quantities.
Installation
Clone or download the Quantities package to your MATLAB folder as +Quantities.
Usage
Construct a units registry, which contains all units, constants, prefixes and dimensions.
>> ureg = Quantities.unitRegistry
ureg =
Map with properties:
Count: 279
KeyType: char
ValueType: any
Optionally pass the verbosity parameter to unitRegistry to see a list of the units loaded.
>> ureg = Quantities.unitRegistry('v',2)
Units and constants can be indexed from the unitRegistry using their name or alias, either in parentheses or with dot notation. The unit, constant and quantity classes all subclass double, so you can perform any operation on them. Combining a double with a unit creates a quantity class object.
>> T1 = 45*ureg('celsius') % index units using parentheses or dot notation
T1 =
45 ± 0 [degC];
>> T2 = 123.3*ureg.degC % index units by name or by alias
T2 =
123.3 ± 0 [degC];
>> heat_loss = ureg.stefan_boltzmann_constant*(T1^4 - T2^4)
heat_loss =
-819814 ± 0 [gram*second^-3];
Perform operations. All units are converted to base.
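As a sanity check on the heat_loss result above, the same arithmetic can be reproduced in plain Python. This sketch assumes the CODATA 2010 value of the Stefan-Boltzmann constant, converts degC to kelvin by hand, and converts W/m^2 to the gram-based value Quantities prints (1 W/m^2 = 1 kg/s^3 = 1000 g/s^3).

```python
# Reproduce the heat_loss computation above in plain Python.
SIGMA = 5.670373e-8  # Stefan-Boltzmann constant, W/(m^2 K^4), CODATA 2010

T1 = 45.0 + 273.15   # 45 degC in kelvin
T2 = 123.3 + 273.15  # 123.3 degC in kelvin

heat_loss_W_per_m2 = SIGMA * (T1**4 - T2**4)  # roughly -820 W/m^2
heat_loss_base = heat_loss_W_per_m2 * 1000.0  # in gram*second^-3, as printed above
print(round(heat_loss_base))
```

The result agrees with the -819814 gram*second^-3 that Quantities reports, which confirms the degC values really are converted to base units before exponentiation.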