SSRLabs - Neural Net Coprocessor

SSRLabs has developed a dedicated convolutional neural net coprocessor to accelerate a variety of computational tasks in modern servers. We support convolutional neural net applications for data mining, graph search, vision systems, analysis of multi-dimensional datasets, mapping of the human brain, and Artificial Intelligence (Deep Learning, particularly the inference stage) at better performance per dollar spent - both in capital expenses and in lifetime operating costs - and better energy efficiency than existing solutions. Unlike other coprocessors or accelerators, SSRLabs' coprocessors are massively parallel accelerators that automatically share and distribute the computational load.
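As a conceptual illustration of what "sharing and distributing the computational load" means for a convolutional workload, the sketch below partitions the output of a 2D convolution row-wise across several parallel compute lanes and stitches the results back together. The lane count, function names, and thread-based execution are illustrative assumptions for this sketch only; they are not SSRLabs' hardware interface or API.

```python
# Conceptual sketch only: a convolution's output partitioned row-wise across
# parallel compute lanes so the load is shared. Lane count and helper names
# are hypothetical and do not reflect SSRLabs' actual hardware or API.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def conv2d_rows(image, kernel, row_start, row_end):
    """Compute output rows [row_start, row_end) of a valid 2D convolution."""
    kh, kw = kernel.shape
    out_w = image.shape[1] - kw + 1
    out = np.empty((row_end - row_start, out_w))
    for i, r in enumerate(range(row_start, row_end)):
        for c in range(out_w):
            out[i, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out


def conv2d_parallel(image, kernel, lanes=4):
    """Split the output rows evenly across `lanes` workers and stitch results."""
    out_h = image.shape[0] - kernel.shape[0] + 1
    bounds = np.linspace(0, out_h, lanes + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=lanes) as pool:
        parts = pool.map(lambda b: conv2d_rows(image, kernel, b[0], b[1]),
                         zip(bounds[:-1], bounds[1:]))
    return np.vstack(list(parts))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.standard_normal((64, 64))
    k = rng.standard_normal((3, 3))
    ref = conv2d_rows(img, k, 0, img.shape[0] - k.shape[0] + 1)
    assert np.allclose(conv2d_parallel(img, k), ref)
    print("partitioned convolution matches single-lane reference")
```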

These processors outperform FPGA-based solutions and every other solution we know of, including GPGPU compute, x86-64, and other dedicated coprocessors. For decision-making support, a convolutional neural net coprocessor helps comb through large data sets and addresses the "Big Data" problem that many companies face today. Graph search and AI are two prominent examples of those applications. Our accelerators scale out better than others due to a novel I/O subsystem that supplants QPI and offers higher bandwidth than Gen-Z. We call this interface UHP, for Universal Host Port, as it connects processors to other processors, to accelerators, and to our memory ASIC, the vlcRAM.
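To illustrate the kind of scale-out that graph search benefits from, the sketch below runs a breadth-first search in which each frontier is split evenly across several worker nodes before the partial results are merged. The node count, the round-robin partitioning, and the thread-based workers are assumptions made for this sketch; they stand in for, and do not describe, the actual UHP interconnect or SSRLabs' graph search implementation.

```python
# Conceptual sketch only: a BFS frontier split across several workers each
# step, merging their partial results - the pattern a high-bandwidth
# scale-out interconnect enables. Partitioning and node count are
# illustrative assumptions, not SSRLabs' UHP protocol.
from concurrent.futures import ThreadPoolExecutor


def expand_partition(graph, frontier_slice, visited):
    """One worker's share of the frontier: gather its unvisited neighbours."""
    return {n for v in frontier_slice for n in graph.get(v, ()) if n not in visited}


def distributed_bfs(graph, source, nodes=4):
    """BFS where each frontier is split evenly across `nodes` workers."""
    visited, frontier = {source}, [source]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        while frontier:
            slices = [frontier[i::nodes] for i in range(nodes)]
            results = pool.map(lambda s: expand_partition(graph, s, visited), slices)
            next_frontier = set().union(*results) - visited
            visited |= next_frontier
            frontier = list(next_frontier)
    return visited


if __name__ == "__main__":
    g = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(sorted(distributed_bfs(g, 0)))  # reachable set: [0, 1, 2, 3, 4, 5]
```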

These comprehensive subsystems include the dedicated coprocessors, firmware, software, and APIs, as well as SDK plugins. Please email us for a list of supported APIs.