NAT64 and DNS64

I have been investigating options for moving my network to a pure IPv6 stack. The main issue here is ensuring that there is still connectivity to the IPv4 Internet after the move. The best option that I have found to support this configuration is the NAT64/DNS64 stack.

Setting this up was a bit of a headache as the documentation for running Tayga as a NAT64 router on Linux was lacking. Still, by following the example strictly, I was able to replicate things on my OpenWRT 12.09 router.
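
For anyone trying to replicate this, the working setup was essentially the stock Tayga example. Here is a minimal sketch of /etc/tayga.conf, assuming the well-known NAT64 prefix 64:ff9b::/96; the pool, the ipv6-addr and the data directory below are placeholders rather than necessarily what I used:

# addresses below are illustrative placeholders
tun-device nat64
ipv4-addr 192.168.255.1
ipv6-addr 2001:db8:1::2
prefix 64:ff9b::/96
dynamic-pool 192.168.255.0/24
data-dir /var/db/tayga

The tunnel interface then has to be created and routed before starting the daemon, with forwarding enabled:

# sysctl -w net.ipv4.ip_forward=1
# sysctl -w net.ipv6.conf.all.forwarding=1
# tayga --mktun
# ip link set nat64 up
# ip route add 192.168.255.0/24 dev nat64
# ip route add 64:ff9b::/96 dev nat64
# tayga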

Setting up the DNS64 side was much easier, and things worked after that: I was able to ping and connect to the IPv4 world from my pure IPv6 network. Unfortunately, I then had trouble reaching the IPv6 Internet. The thing is, my upstream Internet connection is still pure IPv4.
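
As for the DNS64 half, I won’t claim this is my exact configuration, but with BIND 9.8 or later it amounts to a single dns64 statement in the options section of named.conf, pointed at the same prefix that the NAT64 gateway translates:

options {
        dns64 64:ff9b::/96 {
                clients { any; };
        };
};

Note that DNS64 only synthesises AAAA records for names that have no real AAAA record; names that do resolve natively still point at the IPv6 Internet, which I cannot reach.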

So, I’m now investigating the possibility of running a DNS server that will not forward AAAA record lookups, so that DNS64 synthesises everything and all traffic flows through the NAT64. Things do not look good, though: there doesn’t seem to be any DNS server built with that feature. It looks like I’m going to have to roll my own.

I might have to look into the feasibility of modifying the TOTD source, once I can actually find it.

SimpleSAMLphp with WordPress on OpenShift

These are the steps that I used to get SimpleSAMLphp running with WordPress on OpenShift.

First, copy the files up to the server and decompress them.

$ rhc scp MYAPP upload simplesamlphp.tar.gz app-root/data
$ rhc ssh MYAPP
$ cd app-root/data
$ tar -zxf simplesamlphp.tar.gz

Then, link it into a publicly accessible WordPress directory, e.g. uploads:

$ cd uploads
$ ln -s ../simplesamlphp/www/ saml
$ cd ../simplesamlphp

Then, just configure SimpleSAMLphp as usual.

The one key thing to note is that baseurlpath needs to be configured with the full path spelled out explicitly. For some reason, SimpleSAMLphp was unable to detect that it was running behind a reverse proxy.
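
For what it is worth, the relevant line in simplesamlphp/config/config.php ends up looking something like the following; the OpenShift hostname and upload path are placeholders rather than my actual application:

'baseurlpath' => 'https://MYAPP-MYDOMAIN.rhcloud.com/wp-content/uploads/saml/',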

Experimenting with CORS

As a follow-up to yesterday’s post on CORS, I did some simple experiments to see whether it would work, specifically using jQuery primitives. It was easy as pie to make it work. The following little experiment demonstrates the feasibility of using the browser as a middle-man via CORS.

I just wrote a simple cors.php script and fired it up from the browser.
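
The script itself is not reproduced here, but the gist is that it returns a small data payload along with an Access-Control-Allow-Origin: * response header. Whether that header is actually being sent is easy to confirm from the command line:

$ curl -si http://127.0.0.1/cors.php | grep -i access-control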

If the Access-Control-Allow-Origin: * header is removed, then the console log will show the following error:

XMLHttpRequest cannot load http://127.0.0.1/cors.php. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost' is therefore not allowed access.

However, once it is in there, the console log shows the expected output, which is essentially the data payload being transmitted. My only concern now is the size of the data blob that can be transmitted via this method using jQuery; I gather that there should be a limit on the size of such a transmission.

Also, there is a concern about security, so I will need to figure out a mechanism to protect the communications between the parties. There are many other Access-Control-* response headers that can be returned, such as those listed here.
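
For reference, the commonly used ones include:

Access-Control-Allow-Methods
Access-Control-Allow-Headers
Access-Control-Allow-Credentials
Access-Control-Expose-Headers
Access-Control-Max-Age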

This will require more research.

PS: According to this site, jQuery does not support CORS on IE. So that’s a browser limitation that I’ll have to keep in mind.

Cross-Origin Resource Sharing (CORS)

Cross-origin resource sharing (CORS) is an interesting technology that will be very useful for one of our upcoming products. It allows us to use the browser as a proxy to connect to a different machine using JavaScript, and it is supported by all modern browsers, which is a good thing.

This technique, when combined with XMLHttpRequest connections, will allow us to effectively create a tunnel between the primary server and a secondary server via the browser. The public server can then communicate with the private server through the web browser using pure JavaScript.

This is exciting…

Programmer Testing

One issue with hiring programmers is how to test their technical competencies. Merely looking at a CV tells only part of the story. I would like to propose a simple way to actually test their ability to solve problems by writing a program.

There are many on-line testing sites available. However, these are all language-dependent, so a programmer’s prior experience with the chosen language makes a difference. Hiring for one narrow language is a bad idea; it is better to test general logic and problem-solving capabilities.

This is where I think that something like the LLVM intermediate language is useful. It is unlikely that any candidate has learned to write it by hand, as it is almost always generated by a compiler rather than written directly. Therefore, it eliminates any language bias that might inadvertently affect the candidates.

To be fair, I think that the candidate should be assigned the task and given a suitable time frame to learn the language and complete it. A one-week time frame should be sufficient to learn enough of the language to write simple programs. This also tests the candidate’s ability to pick up new languages, which is a necessity in our line of work.

It is also expressive enough to write any program, so the programming tests can be designed to be as easy or as complicated as necessary.

Furthermore, it is an actual programming language that can be compiled into real machine code. This allows the written programs to be tested for functionality as well as performance. It is possible to throw a barrage of test cases at the generated application to see how it performs. The entire system can even be run in a sandbox.

The only thing that is needed, then, is an interface to the system. Technically speaking, candidates can use whatever text editor they want; they would only need to upload the code to a centralised testing server, which would then compile and test it.

For testing digital design skills, an equivalent method would be to use a defunct meta-language like Confluence. It is, again, highly unlikely that anyone has had much experience with the language, and it compiles down to syntactically correct Verilog/VHDL that can be tested with standard tools.

I think that this would be a cool project to work on, if I only had the time. Maybe it’s time to hire another intern to do it.

Ubuntu 12.04 LTS with Missing Dota2 Textures

There is a pretty well-known bug with Dota2 under Linux on Intel graphics hardware. Essentially, model textures are missing entirely.

I had at first tried installing the lts-raring hardware enablement stack, but that did not solve the problem and created new ones with my hardware. According to the bug report, the problem was fixed in Mesa 9.1.5, but lts-raring only came with 9.1.4.
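
Incidentally, the Mesa version that a given hardware enablement stack ships can be checked with apt-cache policy on one of its renamed packages, for example:

$ apt-cache policy libgl1-mesa-glx-lts-raring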

So, I downgraded back to lts-quantal and looked for another solution.

In the end, all I had to do was add a third-party PPA to solve the problem:


# apt-add-repository ppa:glasen/intel-driver
# apt-get update
# apt-get -y dist-upgrade

This installed Mesa 9.1.6 and updated Intel drivers, which solved the problem.
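
To double-check which Mesa build is actually in use after the upgrade, glxinfo (from the mesa-utils package) should now report 9.1.6 in its OpenGL version string:

$ glxinfo | grep "OpenGL version"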

Now, I get to experience Dota2 in all its glory!

Quantal to Raring Upgrade

I faced the same Quantal to Raring upgrade problem on Ubuntu 12.04 LTS with Hardware Enablement as reported here.

In order to get it to work, I had to remove the Quantal Xorg packages and then upgrade everything to Raring. I did this as root from a text console:


# service lightdm stop
# apt-get autoremove xserver-xorg-lts-quantal
# apt-get --install-recommends install linux-generic-lts-raring xserver-xorg-lts-raring libgl1-mesa-glx-lts-raring
# apt-get install linux-tools-lts-raring

That was it: upgraded to the lts-raring hardware enablement kernel and Xorg.
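
After a reboot, the running kernel should be from the raring hardware enablement series (a 3.8 kernel on 12.04), which is easy to confirm:

$ uname -r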

PS: The only issue with this upgrade was that I lost backlight control on my XPS 13 laptop, as reported here.