I got the opportunity to attend a short training and workshop on computing on a massive scale conducted by SGI. The speaker was a highly technical guy, Michael A. Raymond, who delivered a very good presentation. Personally, I learned some things and got some of my queries answered.
He mentioned that the main issue in large-scale computation today is communication, not computation. While I can follow his logic in saying that, I am struck by the thought that these two tend to take turns being the bottleneck. Once communication problems are slowly chipped away, the computation problem will recur.
Actually, I would say that the computation problem is already recurring – with the introduction of heterogeneous computation into the mix. These days, a lot of computation is off-loaded onto computation engines, which may be made up of custom microprocessors, FPGAs or even DSPs and GPUs. Not only that, the number of cores available may not be a nice friendly number – like 6 cores in newer Intel/AMD chips and 7 cores in an IBM Cell processor.
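To make the "unfriendly core count" point concrete, here is a minimal Python sketch (my own illustration, not something from the talk) of dividing a workload across a core count that doesn't split things evenly – with 7 cores, some workers inevitably get one item more than others:

```python
def partition(n_items, n_cores):
    """Return per-core chunk sizes, spreading the remainder
    across the first few cores."""
    base, extra = divmod(n_items, n_cores)
    return [base + (1 if i < extra else 0) for i in range(n_cores)]

# A "friendly" core count divides evenly:
print(partition(100, 4))  # [25, 25, 25, 25]

# An "unfriendly" one does not – two cores carry an extra item:
print(partition(100, 7))  # [15, 15, 14, 14, 14, 14, 14]
```

The load imbalance looks small here, but on a real machine the cores with the extra work finish last and everyone else idles waiting for them, which is exactly the sort of wrinkle odd core counts introduce.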
These little things do tend to throw a wrench in the works a bit.
I wanted to attend this talk in order to learn a bit more about multi-processing, because my next generation of AEMB will definitely feature more and more parallelism, and I wanted an overview of what things are like in this area. However, much of the talk focused on HPC applications, which may not be what I am interested in. That said, I still learned some interesting things about how they solved certain problems.
I intend to introduce multi-processing, in addition to the multi-threading already on the AEMB. It is the next logical step to improve the processor before moving it up to multi-core. I will probably need to build in some sort of fast inter-core interconnect to enable multi-core processing on multi-threaded and multi-processed AEMBs. My littlest processor may just end up being the world's leader in on-chip parallelism.
Fun days ahead!