Verifying Anonymity

When it comes to electronic voting, the biggest problem has always been the divergent requirements of keeping a ballot anonymous while still authenticating and verifying the voter who cast it. There is a lot of research going on to reconcile these requirements. While the following is a good step in the right direction, it rests on one specific assumption – that the ballot sheets are handed out randomly. It is unlikely that such a thing can be done fairly and easily.

Solution, looking for Problem?

Sometimes, I wonder whether I am driven to solve problems or whether I just get caught up in the beauty of the solution instead. There are some technologies that I have built that are definitely useful – for something. However, I may be staring too closely at the solution to actually see the problem space. Therefore, I have decided not to work on the solution for a while but to look outside for inspiration instead. I already know what I want to do. Now, I just need to know where to apply it.

For the next couple of months, I will embark on a serious journey of creation. I plan to document the entire process on this blog and hope that it may one day be useful to somebody. For now, I will go for a short run followed by dinner and some shopping.

Surround Microphone

I was watching Glee and a thought occurred to me. It was the final episode of Season 1, at the regional singing competition. Rachel and Finn walked down the aisles through the audience, singing their parts. I wondered what it would have been like to be seated in the audience, listening to them sing. That was when it hit me.

It would be cool if they had location-aware wireless microphones, with that location information integrated into the room's sound system. So, as Rachel walked down the left aisle onto the stage, the surround sound system in the auditorium would amplify her voice, but to everyone in the audience it would sound as though the voice came from exactly where Rachel was standing.

This should not be something too difficult to do. Let’s explore this idea a bit.

Most of the hardware complexity would be in the speakers and the software complexity would lie in the sound system. The trick would be to transmit a hidden signal within the actual audio transmission. This signal would be picked up by the microphone on the singer. Using some algorithm magic, we can estimate the position of the microphone in the room in relation to the position of the speakers. This can then be used to calibrate the voice stream received by the system to do its surround sound magic.
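To make the idea concrete, here is a minimal sketch of how that position estimate might work. It assumes (my assumption, not a description of any real system) that each speaker plays its own known pilot waveform and that the mic's recording clock is synchronised with playback, so correlation lags give absolute times of flight. The function names, speaker layout and sample rate are all illustrative.

```python
# Sketch only: estimate the mic position from the delay of each
# speaker's known pilot signal, then trilaterate.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def estimate_delays(recording, pilots, sample_rate):
    """Cross-correlate the mic recording with each speaker's known pilot
    and return the apparent time of flight (seconds) to each speaker."""
    delays = []
    for pilot in pilots:
        corr = np.correlate(recording, pilot, mode="full")
        lag = np.argmax(corr) - (len(pilot) - 1)   # delay in samples
        delays.append(max(lag, 0) / sample_rate)
    return np.array(delays)

def locate_mic(speaker_positions, distances):
    """Least-squares trilateration: subtract the first speaker's range
    equation from the others to linearise, then solve A x = b."""
    p = np.asarray(speaker_positions, dtype=float)   # shape (n, 2)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# distances = estimate_delays(mic_samples, pilot_waveforms, 48000) * SPEED_OF_SOUND
# mic_xy = locate_mic([(0, 0), (10, 0), (0, 8), (10, 8)], distances)
```

In practice the clocks would drift, so a real system would probably work with time differences between speakers rather than absolute delays, but the shape of the computation stays the same.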

This has the advantage of being a cheap option. We can do this with present day equipment. This is the passive surround mic.

The hidden signal could be embedded in either the very low or the very high frequency range. Knowing how amplifiers work, it would be better to stick to the lower frequencies. Then, to avoid picking up noise, we can borrow an idea from infra-red transmitters – transmit not just a raw tone but a digital signal over this low-frequency carrier, which allows checksumming and error correction to be done.
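A rough sketch of what that digital pilot could look like, purely for illustration: each speaker repeats a short frame containing its ID and a checksum byte, on-off keyed onto a sub-audible carrier. The carrier frequency, bit rate and framing here are assumptions, not an existing standard.

```python
# Sketch only: a speaker ID beacon keyed onto a quiet low-frequency
# carrier, with a checksum so the receiver can reject corrupted frames.
import numpy as np

CARRIER_HZ = 30.0     # assumed sub-audible pilot carrier
BIT_SECONDS = 0.1     # 10 bits per second is plenty for an ID beacon
SAMPLE_RATE = 48000

def frame(speaker_id: int) -> list:
    """8-bit ID followed by an 8-bit checksum; ID + checksum == 0 mod 256."""
    checksum = (-speaker_id) & 0xFF
    bits = []
    for value in (speaker_id & 0xFF, checksum):
        bits.extend((value >> i) & 1 for i in range(7, -1, -1))
    return bits

def modulate(bits):
    """On-off key the bits onto the carrier, kept quiet so it sits
    underneath the actual programme audio."""
    samples_per_bit = int(BIT_SECONDS * SAMPLE_RATE)
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tone = 0.05 * np.sin(2 * np.pi * CARRIER_HZ * t)
    return np.concatenate([tone if b else np.zeros(samples_per_bit) for b in bits])

# pilot = modulate(frame(speaker_id=3))   # mix this under the main audio feed
```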

The resolution of such a system may not be great, but it should be far better than the blanket ambient amplification we get today. The response time may not be great either. However, it will definitely be good enough for most concert situations, where people do not move faster than a speeding bullet.

To improve this all, we move onto the active surround mic. We will just need to equip the microphone with a wireless transceiver and mount a bunch of base stations on the speakers. The mic can then triangulate its own position relative to each base station using the strength of the wireless signal and also transmit its voice data to the nearest base stations over the same frequency.
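For the active version, one common way to turn signal strength into distance is a log-distance path-loss model. The sketch below uses made-up radio constants and reuses the trilateration step from the passive sketch above; it is an illustration of the idea, not a calibrated design.

```python
# Sketch only: convert RSSI readings from each base station into
# distances, then feed them into the same trilateration step as before.
# The constants are assumptions, not measurements of any real radio.

TX_POWER_DBM = 0.0        # assumed RSSI measured at 1 m from a base station
PATH_LOSS_EXPONENT = 2.2  # ~2 in free space, higher indoors

def rssi_to_distance(rssi_dbm: float) -> float:
    """Invert the log-distance model: RSSI = P0 - 10 * n * log10(d)."""
    return 10 ** ((TX_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

# distances = [rssi_to_distance(r) for r in latest_rssi_readings]
# mic_xy = locate_mic(base_station_positions, distances)  # from the earlier sketch
```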

It’s a good idea, isn’t it?

PS: I’m not sure if such things are already on the market (I am not an audiophile), but they certainly should be!

Little Big Computer

Wow!

After taking the customary bow of respect to the person who designed this level on LBP, I’d just like to say that this was not that difficult to do – albeit very time consuming.

He did not actually design a complete computer but focused on the main computational component – the arithmetic unit. In this particular case, the arithmetic unit can perform two functions – addition and subtraction.
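For the curious, here is what that arithmetic unit amounts to in software terms: a ripple-carry adder that doubles as a subtractor via two's complement. This is my own illustrative Python sketch of the logic, not the level's actual wiring.

```python
# Sketch only: an 8-bit two's-complement adder-subtractor built from
# simulated gates rather than LittleBigPlanet's moving parts.
def full_adder(a: int, b: int, carry_in: int):
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_sub(x: int, y: int, subtract: bool, bits: int = 8) -> int:
    """Ripple-carry add; for subtraction, invert y and set carry-in to 1
    (two's-complement negation), exactly as a hardware ALU would."""
    carry = 1 if subtract else 0
    result = 0
    for i in range(bits):
        a = (x >> i) & 1
        b = ((y >> i) & 1) ^ (1 if subtract else 0)
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result

assert add_sub(5, 3, subtract=False) == 8
assert add_sub(5, 3, subtract=True) == 2
```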

What this video actually shows is an exemplification of the Church-Turing thesis. In this case, the PS3 has successfully simulated an arithmetic unit within the confines of a simulated virtual world. Nice recursion.

Also, it might seem weird that this computer was constructed from moving parts instead of, say, electrons. However, before the world was inundated with electronic digital computers, we built mechanical ones too, such as the Zuse Z1 currently on display in Berlin. Our history of computing is filled with all kinds of computers.

Now, if only it could be turned into a really interesting gaming level.

Apple RISC Machines

The Internet is abuzz with rumours that Apple is considering buying ARM – the company behind the ARM processors that have all but conquered the mobile computing space. In fact, the market has already reacted to these rumours, pushing the price of ARM stock up to £2.55 a share today.

Now, what do I think of the rumour?

The businessman in me says that it is baseless. Apple would have no incentive to buy ARM at all because ARM is an intellectual property company. It does not sell any physical microprocessors but instead licenses its designs to other companies, of which Apple is one. So, Apple can already do whatever it wants with the ARM core that it licenses, short of re-selling it to other people. It can pop it inside any product it wishes and even make modifications and customisations, as it did for the A4 processor used in the iPad.

The only business reason for buying ARM would be to deny competitors the use of ARM chips. This makes some sense if you think of it from Steve’s point of view. ARM is undeniably the market leader in mobile computing for a reason – its technology allows its processors to run really fast while consuming very little power. That is why everyone uses ARM cores. By controlling ARM, Steve would essentially be able to dictate who gets to make mobile devices and who does not.

However, the engineer in me thinks that Steve would be crazy to do that. Although the ARM processor is technically superior to its competition, it is by no means the only way to make mobile devices. If Apple blocks others from using ARM, there are many other people who would be happy to step into that market (including yours truly). It just does not make any sense for Apple to absorb ARM – considering that it would have to spend about $8 billion to acquire that asset.

Even if Steve decides that nobody else in the world can use ARM except Apple, they would not gain anything. Their chief rival in the mobile space, Google, would not even break a sweat. While the Android platform is currently based on ARM, there is no reason why it cannot be switched to MIPS or something else fairly easily. The kernel is Linux, which supports dozens of microprocessor architectures besides ARM. So, while it would be a small hiccup, it would not be a show-stopper.

What’s most likely happening is that Apple is interested in taking a significant stake in ARM. Now, that would make both engineering and business sense. A stake in ARM would allow Steve to ensure that Apple retains some influence in that area and can steer ARM in the right direction. It would also get Apple cheaper licenses, letting it put ARM in everything, including Macs and servers.

That said, I do hope that this gets ARM some much-needed exposure. Not many people have heard of the company, even though they are almost certainly using a device powered by an ARM processor. It is ubiquitous like that.

Software Engineering

A colleague of mine sent me an article by Tom DeMarco, one of the pioneers of structured analysis and a strong believer in software engineering processes. Having advocated metrics, metrics and more metrics in the past, he has come to realise that software engineering is rather a misnomer. In one sense it is engineering, but in another sense it is not. After decades of invaluable experience, he has come to one conclusion:

For the past 40 years, for example, we’ve tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business.

Having also written code for a couple of decades, since my humble beginnings with LOGO and BASIC, I have to say that I am of the opinion that it is very difficult to control the process of software creation – and it is a creative process, no doubt about it. However, as an engineer, I do believe that metrics are useful, but only after the fact. What I mean to say is that metrics are only useful in documenting failures.

It is silly to try to control software creation during its inception and conception. You just need to hire the best people, give them the best tools and then hope for the best. Project managers who try to micro-manage a project will invariably fail because of the nature of the metrics used – they try to attribute success to certain values. Unfortunately, the success of a piece of software rarely depends on the number of lines of code maintained, nor does it depend on the number of faults found.

After a project has come to a finish – and when I say finished, I mean that the people involved have come to a unanimous decision that they are happy with the state of the product at the time – that is when software metrics can be used to measure certain things. For example, it could be useful to measure individual contributions to the code base and identify good managers. It could also be used to measure the number of significant changes made to the code base.
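As an illustration of the kind of after-the-fact measurement I mean, here is a small sketch that pulls per-author commit counts out of a finished project's git history. The repository path and the choice of commit count as the metric are mine; treat it as a starting point for a post-mortem, not a recommendation.

```python
# Sketch only: gather a simple after-the-fact contribution metric from a
# finished project's version history using plain `git log`. Commit count
# is a crude proxy, which is exactly why it belongs in a retrospective
# rather than in day-to-day project control.
import subprocess
from collections import Counter

def commits_per_author(repo_path: str) -> Counter:
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

# for author, n in commits_per_author(".").most_common():
#     print(f"{n:6d}  {author}")
```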

Anyway, I think that the video below is as good a metric as any for measuring software quality. I like the fact that the contributors seem to come in waves. I wonder if it correlates with real-world events in any way.