Their tech provides high performance (350 MB/s available to the user) and low latency (worst case is something like 2 or 3 microseconds to send 512 bytes to another node). So it can pretty much kick the butt of gigabit Ethernet.
We could probably do some really cool stuff to boost performance even further when using SCI with some of the things I have in mind for the multithreaded ndb kernel – basically changing some of the ways we send and receive signals, plus improvements to the shared memory stuff.
Big points from the presentation are:
- small messages sent using basic CPU instructions (it’s remote memory mapping)
- low cost to write to remote memory address
- raw worst case send latency for 8 bytes is about 210 nanoseconds
- no need to lock down or register memory
- TCP/IP processing not done in software
- just LD_PRELOAD the library and it does your (user specified) IP communication over the SCI interconnect
- can be fully redundant (dual cards, distributed switching)
- each card is about 5 W of power (rather insignificant compared to other techs apparently)
- really small time for failover
It’s also good to note that 10 gigabit ethernet doesn’t really buy you anything in reducing latency. SCI gives you both improved bandwidth and latency.
People wanting more performance out of MySQL Cluster should have a good look at it.
It’s also used in fighter planes – which make cool loud jet noises.
(err… I didn’t mean to sound too rah-rah. Hopefully I’ve just sounded like I think the tech is shiny.)
UPDATE: corrected milli to nano.
UPDATE mk2: corrected nano to micro. Oh how I wish I just typed correctly to begin with. At least I’ve had some rest now :)