Do you need to improve the performance of your data center, accelerate Big Data applications, and cut execution time for your HPC users? You are not alone. SSRLabs offers accelerators for Big Data and HPC, as well as advanced memories, that enable you to improve performance while reducing operating costs. Don't fall for software solutions when hardware is required. Do you remember the Syncronys SoftRAM "RAM doubler" from 1995?
We are your partner in your quest to offer better solutions in Big Data, HPC and Cloud Computing for your customers.
Big Data: Big Data is all about keeping data in memory - not on disk. If your supplier tells you that they offer SSDs instead of hard disks, you know that they don't understand your requirements. Ask us for advice on how to improve the efficiency and performance of your data center. Most Big Data applications benefit from a very large main memory, so naturally we suggest deploying our vlcRAM in conjunction with a UHP-enabled processor. SSRLabs has the right solution for Big Data applications that are not served well by MapReduce schemes.
Cloud Computing: Ultimately, Cloud Computing refers to an interconnected set of data centers that store your data and execute your applications. Behind every data center are large numbers of servers - and the more energy-efficient and instruction-efficient they can be made, the better for every single user. SSRLabs has the right solutions for numerically intensive applications, for Big Data applications, and for Artificial Intelligence, Machine Learning and Deep Learning applications in your private, public or hybrid cloud.
Machine Learning and Deep Learning: Both of these rely on emulating the human brain in its basic structure. Many of today's systems are based on what is called a convolutional neural network, or CNN. If the interconnects can be changed (like the synapses that are created when the human brain learns something through training), the design is called neuromorphic. Our accelerator is a true neuromorphic CNN (or nCNN); it does not need to rely on emulating one. Thresholds for spiking and triggering can be set, and pre-learned coefficients can be loaded and stored. This is - like many other features - unique to our nCNN. As a result, we suggest that you look into our nCNN paired with vlcRAM as an accelerator to see if that combination fits your application space.
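The spiking-threshold behavior described above can be sketched in software. This is a generic illustrative model, not SSRLabs' nCNN interface; the function name, weights, and threshold value are all hypothetical.

```python
# Illustrative sketch of a threshold-spiking neuron with pre-loaded
# ("pre-learned") coefficients, as described above. Generic model only;
# all names and values here are hypothetical, not SSRLabs' API.

def spiking_neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Pre-learned coefficients are simply loaded, not trained here.
weights = [0.5, -0.25, 0.75]
print(spiking_neuron([1.0, 1.0, 1.0], weights, threshold=0.9))  # fires: sum is 1.0
print(spiking_neuron([1.0, 0.0, 0.0], weights, threshold=0.9))  # stays silent
```

In a neuromorphic device, the weighted sum and threshold comparison happen in hardware rather than in a software loop; the sketch only shows the logical behavior.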
HPC: A quick look at the world's fastest supercomputers reveals a number of issues. Peak theoretical performance and measured performance differ substantially. On LINPACK - the dense linear-algebra benchmark used to rank the TOP500, and one of the most parallelizable workloads a supercomputer will ever run - Tianhe-2 achieves a mere 62% of its theoretical peak, and for other computational workloads the efficiency is even lower. Yet among supercomputers this is one of the better levels of efficiency. Other supercomputers fare far worse - particularly those that deploy SIMD accelerators such as GPGPUs. Simple meshes inside accelerators don't work well either, as Tilera's lack of success has demonstrated. To see the gap between where we are today and where the DOE's ExaFLOPS Challenge wants the HPC industry to be, just look at the numbers. Today's highest-performing supercomputer is Tianhe-2, with about 34 PFLOPS of numeric performance at a power consumption of roughly 18 MW. That works out to about 1.889 GFLOPS/W: Tianhe-2 delivers nearly 2 billion floating-point operations per second for every watt of electricity it consumes while running the LINPACK benchmark. The DOE asks for 1 ExaFLOPS (that is, 10^18 floating-point operations per second) at a total allowable power consumption of 20 MW, presumably on a more representative mix of benchmarks. That boils down to 50 GFLOPS/W. In other words, the energy efficiency of today's supercomputers must improve by a factor of more than 25 to meet the ExaFLOPS Challenge. Not even Moore's Law - if we assume it continues to hold - will get us there by the 2020 deadline. It is clear that simply banking on Moore's Law won't suffice. Architectural changes are required, and that is what SSRLabs does.
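The arithmetic behind these figures is easy to check. The following sketch recomputes the efficiency numbers quoted above; every input (34 PFLOPS, 18 MW, 1 ExaFLOPS, 20 MW) is taken from the text, nothing is measured or added here.

```python
# Back-of-the-envelope check of the energy-efficiency figures above.
# Inputs are the numbers quoted in the text for Tianhe-2 and the DOE target.

def gflops_per_watt(flops: float, watts: float) -> float:
    """Energy efficiency in GFLOPS per watt."""
    return flops / watts / 1e9

tianhe2 = gflops_per_watt(34e15, 18e6)    # ~1.889 GFLOPS/W
exascale = gflops_per_watt(1e18, 20e6)    # 50 GFLOPS/W

print(f"Tianhe-2:             {tianhe2:.3f} GFLOPS/W")
print(f"Exascale target:      {exascale:.1f} GFLOPS/W")
print(f"Required improvement: {exascale / tianhe2:.1f}x")
```

The last line reproduces the "factor of more than 25" claim: 50 divided by roughly 1.889 is about 26.5.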