Neural Net Coprocessor

SSRLabs' Convolutional Neural Net Coprocessors are massively parallel accelerators for graph search and for image and video analysis (vision systems), as well as for Deep Learning, Machine Learning and Artificial Intelligence. Thanks to their high-bandwidth interfaces (UHP) to and from memory and to the host or other processors, they scale out more linearly than any other convolutional neural net accelerator. We have equipped the Convolutional Neural Net Coprocessors with an easy-to-use API for the most common operations, and they are accessible via open-source frameworks such as TensorFlow and Caffe.
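The core operation such a coprocessor accelerates in hardware is the 2-D convolution. As a purely illustrative software reference (it does not use any SSRLabs API), a minimal pure-Python version:

```python
# Reference 2-D convolution in pure Python -- the operation a
# convolutional neural net coprocessor accelerates in hardware.
# Illustrative only; not an SSRLabs API.

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (really cross-correlation, as in
    most deep-learning frameworks) of nested lists image and kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# 3x3 input, 2x2 diagonal kernel
image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(image, kernel))  # [[6, 8], [12, 14]]
```

A hardware accelerator evaluates many such windowed sums in parallel; frameworks like TensorFlow and Caffe dispatch this same operation to whatever backend is available.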

Floating-Point Coprocessor

SSRLabs' Floating-Point Coprocessors are massively parallel accelerators with industry-leading performance for traditional HPC workloads such as finite element methods (FEM), Fourier Transforms and field solvers. Thanks to their high-bandwidth interfaces (UHP) to memory and to other coprocessors, and because they are not constrained to SIMD operation, they scale out more linearly than any other accelerator. The accelerator executes OpenCL and OpenACC code natively, making it very easy to use.
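The Fourier Transform mentioned above is representative of the floating-point-dense kernels this coprocessor targets. As an illustrative software sketch only (a real deployment would express the loop as an OpenCL kernel or an OpenACC-annotated region, not pure Python), a naive discrete Fourier transform:

```python
# Naive O(n^2) discrete Fourier transform in pure Python --
# representative of the floating-point workloads (alongside FEM and
# field solvers) such a coprocessor accelerates. Illustrative only.
import cmath

def dft(x):
    """DFT of a sequence of complex samples."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n))
            for j in range(n)]

# A single-frequency complex exponential concentrates its energy
# in exactly one frequency bin.
samples = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
spectrum = dft(samples)
print(round(abs(spectrum[1])))  # 8: all energy lands in bin 1
```

Because each output bin is an independent sum, the n^2 multiply-accumulates parallelize naturally, and non-SIMD execution lets differently shaped loops (e.g. FEM assembly) run alongside them.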

Very Large Capacity Memory

SSRLabs' Very Large Capacity Memory ASICs are 3D-stacked memory devices with capacities of 128 GB or 512 GB and a footprint of less than 70 mm × 70 mm. Since they can be programmed to be volatile or non-volatile, they can be used as RAM or as mass storage, with or without a file system.
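To make the volatile/non-volatile distinction concrete, here is a hypothetical sketch of what mode selection might look like from software. The class name, method names and mode strings are invented for illustration; they are not part of any published SSRLabs interface.

```python
# Hypothetical sketch of volatile/non-volatile mode selection.
# VlcMemory, set_mode and the mode names are invented for
# illustration -- not a published SSRLabs API.

class VlcMemory:
    MODES = ("volatile", "non-volatile")

    def __init__(self, capacity_gb):
        if capacity_gb not in (128, 512):   # the two offered capacities
            raise ValueError("capacity must be 128 or 512 GB")
        self.capacity_gb = capacity_gb
        self.mode = "volatile"              # behave as RAM by default

    def set_mode(self, mode):
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

bank = VlcMemory(512)
bank.set_mode("non-volatile")   # use as mass storage instead of RAM
print(bank.mode)                # non-volatile
```

In the non-volatile configuration the device can back a file system like any block device; in the volatile configuration it presents as ordinary RAM.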

Licensable IP

While developing SSRLabs' pScale™ Coprocessors, we created non-core IP and synthesizable building blocks that we are interested in licensing out or selling. These building blocks relate to I/O, our UHP (Universal Host Port), and other components such as DSPs, IoT controllers, and assorted image-processing and math functions. We have also developed cryptography cores for SHA, AES and elliptic curves.
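The SHA family those cores implement is a standardized primitive, so hardware output can be checked against any software reference. For instance, the NIST test vector for SHA-256 of the message "abc", computed with Python's standard hashlib:

```python
# Software cross-check for one primitive the cores implement:
# SHA-256 via Python's standard hashlib (NIST test vector for "abc").
import hashlib

digest = hashlib.sha256(b"abc").hexdigest()
print(digest)
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

A hardware core producing the same digest for the same message demonstrates conformance to the FIPS 180 specification.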