Victor Krutul is the Director of Marketing for the Silicon Photonics Operation at Intel. He shares the vision and passion of Mario Paniccia that Silicon Photonics will one day revolutionize the way we build computers and the way computers talk to each other. His other passions are tennis and motorcycles (but not at the same time)!
I am happy to report that Fujitsu announced at its annual Fujitsu Forum on November 5th, 2013, that it has worked with Intel to build and demonstrate the world's first Intel® Optical PCI Express (OPCIe) based server. This OPCIe server was enabled by Intel® Silicon Photonics technology. I think Fujitsu did some good work here in recognizing that OPCIe-powered servers offer several advantages over non-OPCIe-based servers. Rack-based servers, especially 1U and 2U servers, are space and power constrained. Sometimes OEMs and end users want to add capabilities such as more storage and CPUs to these servers but are limited, either because there is simply not enough space for these components or because packing too many components too close together increases the heat density and prevents the system from being able to cool them.
Fujitsu found a way to fix these limitations!
The solution to the power and space density problems is to locate the storage and compute components on a remote blade or tray in a way that makes them appear to the CPU to be on the main motherboard. The other way to do this is to have a pool of hard drives managed by a second server, but that approach requires messages to be sent between the two servers, which adds latency. It is possible to do this with copper cables; however, the distance copper cables can span is limited by electromagnetic interference (EMI). One could use amplifiers and signal conditioners, but these obviously add power and cost. Additionally, PCI Express cables can be heavy and bulky. I have one of these PCI Express Gen 3 16-lane cables and it feels like it weighs 20 lbs. Compare this to an MXC cable that carries 10x the bandwidth and weighs one to two pounds depending on length.
Fujitsu took two standard Primergy RX200 servers and added an Intel® Silicon Photonics module into each, along with an Intel-designed FPGA. The FPGA did the necessary signal conditioning to make PCI Express "optical friendly". Using Intel® Silicon Photonics, they were able to send the PCI Express protocol optically through an MXC connector to an expansion box (see picture below). In this expansion box were several solid state disks (SSDs) and Xeon Phi co-processors, and of course a Silicon Photonics module along with the FPGA to make PCI Express optical friendly. The beauty of this approach was that the SSDs and Xeon Phis appeared to the RX200 server as if they were on the motherboard. With photons traveling at 186,000 miles per second in a vacuum (somewhat slower in fiber), the extra latency of traveling down a few meters of cable cannot reliably be measured; it can be calculated to be ~5 ns per meter, or 5 billionths of a second.

So what are the benefits of this approach? Basically there are four. First, Fujitsu was able to increase the storage capacity of the server because it could now utilize the additional disk drives in the expansion box; the number of drives is determined only by the physical size of the box. The second benefit is that they were able to increase the effective CPU capacity of the Xeon E5s in the RX200 server, because the Xeon E5s could now utilize the CPU capacity of the Xeon Phi co-processors. In a standard 1U rack it would be hard, if not impossible, to incorporate Xeon Phis. The third benefit is cooling: putting the SSDs in an expansion box allows one to burn more power, because the cooling load is divided between the fans in the 1U server and those in the expansion box. The fourth benefit is what is called cooling density, or how much heat needs to be removed per cubic centimeter. Let me make up an example.
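As a quick sanity check on that ~5 ns per meter figure, here is a minimal back-of-the-envelope sketch. The refractive index value is my assumption for typical silica fiber; it is not from the announcement:

```python
# Back-of-the-envelope propagation delay through optical fiber.
C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
FIBER_INDEX = 1.47       # assumed refractive index of typical silica fiber

def fiber_latency_ns(length_m: float) -> float:
    """Propagation delay in nanoseconds for a fiber of the given length."""
    speed_in_fiber = C_VACUUM / FIBER_INDEX   # roughly 2/3 of c
    return length_m / speed_in_fiber * 1e9

print(f"{fiber_latency_ns(1):.1f} ns per meter")    # ~4.9 ns/m
print(f"{fiber_latency_ns(3):.1f} ns for a 3 m cable")
```

The result, about 4.9 ns per meter, matches the ~5 ns/meter figure quoted above: negligible next to the microseconds of latency a second server hop would add.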
For simplicity's sake, let's say the volume of a 1U server is 1 cubic meter, and let's say there are 3 fans cooling it, each able to cool 333 watts, for a total capacity of roughly 1000 watts of cooling. If I space components evenly, each fan does its share and I can cool 1000 watts. Now assume I place all the components so that just one fan is cooling them, because there is no room in front of the other two fans. If those components dissipate more than 333 watts, they can't be cooled. That's cooling density. The Fujitsu approach solves the SSD expansion problem, the CPU expansion problem, and both the total cooling and cooling density problems.
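The made-up numbers above can be sketched as a toy model. These are the illustrative values from the example, not real server specifications:

```python
# Toy model of cooling density: the total cooling capacity is sufficient,
# but each fan can only cool the load placed directly in front of it.
FAN_CAPACITY_W = 333   # illustrative per-fan capacity from the example
NUM_FANS = 3           # total capacity ~1000 W

def can_cool(load_per_fan_w):
    """True if no individual fan must cool more than its own capacity."""
    assert len(load_per_fan_w) == NUM_FANS
    return all(load <= FAN_CAPACITY_W for load in load_per_fan_w)

print(can_cool([300, 300, 300]))  # True: 900 W spread evenly is fine
print(can_cool([900, 0, 0]))      # False: same 900 W crowded at one fan
```

The same 900 watts is coolable or not depending purely on how densely it is packed, which is exactly the problem the expansion box sidesteps.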
Go to https://www-ssl.intel.com/content/dam/www/public/us/en/images/research/pci-express-and-mxc-2.jpg if you want to see the PCI Express copper cable vs. the MXC optical cable (you will also see we had a little fun with the whole optical vs. copper thing).
Besides Intel® Silicon Photonics, the Fujitsu demo also included Xeon E5 microprocessors and Xeon Phi co-processors.
Why does Intel want to put lasers in and around computers?
Photonic signaling (aka fiber optics) has two fundamental advantages over copper signaling. First, when electrical signals travel down a wire or PCB trace they emit electromagnetic radiation (EMI), and when the EMI from one wire or trace couples into an adjacent one it causes noise, which limits the bandwidth-distance product. For example, 10G Ethernet copper cables have a practical limit of 10 meters. Yes, you can put amplifiers or signal conditioners on the cables and make an "active copper cable", but these add power and cost. Active copper cables are made for 10G Ethernet, and they have a practical limit of 20 meters.
Photons don't emit EMI the way electrons do, so fiber-based cables can go much farther. For example, with the lower-cost lasers used in data centers today, at 10G you can build 500-meter cables. You can go as far as 80 km with a more expensive laser, but those are needed only a fraction of the time in the data center (usually when you are connecting the data center to the outside world).
The other benefit of optical communication is lighter cables. Optical fibers are thin (typically about 125 microns in diameter) and light. I have heard of situations where large data centers had to reinforce their raised floors because, with all the copper cable, the floor loading limits would have been exceeded.
So how come optical communications is not used more in the data center today? The answer is cost!
Optical devices made for data centers are expensive. They are made out of expensive and exotic materials like lithium niobate or gallium arsenide: difficult to pronounce, even more difficult to manufacture. The state of the art for these exotic materials is 3-inch wafers with very low yields. Manufacturing these optical devices is expensive. They are assembled inside gold-lined cans, and sometimes manual assembly is required as technicians "light up" the lasers and align them to the thin fibers. A special index-matching epoxy is used that can sometimes cost as much per ounce as gold. The bottom line is that while optical communications can go farther and uses lighter cables, it costs a lot more.
Enter Silicon Photonics! Silicon Photonics is the science of making photonic devices out of silicon in a CMOS fab. Photonic means optical, but we use the word photonics because the word "optical" is also used when describing eyeglasses or telescopes. Silicon is the second most common element in the Earth's crust, so it's not expensive. Intel has 40+ years of CMOS manufacturing experience and has worked over those 40 years to drive costs down and manufacturing speed up. In fact, Intel currently has over $65 billion of capital investment in CMOS fabs around the world. In short, the vision of Intel® Silicon Photonics is to combine the natural advantages of optical communications with the low-cost advantages of making devices out of silicon in a CMOS fab.
Intel has been working on Intel® Silicon Photonics (SiPh) for over ten years and has begun the process of productizing it. Earlier this year, at the OCP Summit, Intel announced that we had begun the long process of building up our manufacturing capabilities for Silicon Photonics. We also announced that we had sampled early parts to customers.
People often ask me when we will ship our products and how much they will cost. They also ask for all sorts of technical details about our SiPh modules. I tell them that Intel is focusing on a full line of solutions, not a single component technology. What our customers want are complete Silicon Photonics-based solutions that will make computing easier, faster, or less costly. Let me cite our record of delivering end-to-end solutions:
Summary of Intel Solution Announcements
January 2013: We made a joint announcement with Facebook at the Open Compute Project (OCP) meeting that we had worked together to design a disaggregated rack architecture (since renamed RSA). This architecture used Intel® Silicon Photonics and allowed the storage and networking to be disaggregated, or moved away from the CPU motherboard. The benefit is that users can now choose which components they want to upgrade and are not forced to upgrade everything at the same time.
April 2013: At the Intel Developer Forum we gave the first-ever public demonstration of Intel® Silicon Photonics running at 100G.
September 2013: We demonstrated a live, working Rack Scale Architecture solution using Intel® Silicon Photonics links carrying the Ethernet protocol.
September 2013: Joint announcement with Corning of a new MXC and ClearCurve fiber solution capable of transmitting data 300 meters with Intel® Silicon Photonics at 25G. This reinforced our strategy of delivering a complete solution, including cables and connectors optimized for Intel® Silicon Photonics.
September 2013: Updated demonstration of a solution using Silicon Photonics to send data at 25G for more than 800 meters over multimode fiber, a new world record.
Today: Intel has extended its Silicon Photonics solution leadership with a joint announcement with Fujitsu demonstrating the world's first Intel® Silicon Photonics link carrying the PCI Express protocol.
I hope you will agree with me that Intel is focusing on more than just CPUs or optical modules and will deliver a complete Silicon Photonics solution!