Our Products

AzureBlade K340I Intelligent Accelerator Card

Introducing the AzureBlade K-Series M.2 Accelerator Card, powered by the ultra-compact AE7100 chip based on RPP Core technology. The AzureBlade is ideal for LLM and neural network inferencing wherever space is limited and low power is prioritized. It is fully programmable and compatible with the CUDA language and ONNX models, supporting a wide range of AI applications. With up to 25.6 TOPs of compute and 60 GB/s of memory bandwidth, the AzureBlade K-Series is perfect for running Llama2-7B, Stable Diffusion, and similar Generative AI models, all in a form factor suited to AI PCs, Industry 4.0, security, medical imaging, and many other applications.
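As an illustration of the ONNX compatibility described above, here is a minimal sketch that loads an exported ONNX model and runs one inference call using the standard onnxruntime Python package. The model path and the execution provider are placeholders: the actual runtime, SDK, and provider name for the AzureBlade card are not specified here and would follow the card’s own documentation.

    # Minimal sketch: running an ONNX model with the standard onnxruntime package.
    # "model.onnx" is a placeholder path; the execution provider is a stand-in --
    # a card-specific provider would come from the vendor SDK.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",                        # placeholder: any exported ONNX model
        providers=["CPUExecutionProvider"],  # stand-in for a card-specific provider
    )

    # Build a dummy input matching the model's first input tensor.
    inp = session.get_inputs()[0]
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
    dummy = np.random.rand(*shape).astype(np.float32)            # assumes a float32 input

    outputs = session.run(None, {inp.name: dummy})
    print([out.shape for out in outputs])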

The main parameters:

AE7100

AzurEngine’s most compact chip, the AE7100, based on RPP Core technology, achieves high performance through its powerful AI parallel computing capability. It delivers up to 32 TOPs of performance in an area of only 17 x 17 mm, demonstrating ultra-high efficiency. Featuring the innovative RPP architecture, the AE7100 is ideal for Generative AI applications utilizing LLMs that require a low power footprint and low thermal overhead.

The main parameters:

AzureBlade M520N AI Intelligent Accelerator Card

Introducing the AzureBlade M-Series Accelerator Card, powered by the AE8100 chip based on RPP Core technology, providing high-performance neural network inferencing in any compatible PCIe slot. It is fully programmable and compatible with the CUDA language and ONNX models, supporting a wide range of AI and parallel computing applications. With up to 32 TOPs of compute capability, 60 GB/s of memory bandwidth, and 32-channel video decoding, the AzureBlade M-Series is perfect for running Generative AI, deep convolutional NN models, and DSP/ISP algorithms in edge servers.

The main parameters:

AE8100

AzurEngine’s first chip, the AE8100, based on RPP Core technology, is a high-performance parallel processor capable of delivering 32 TOPs while consuming less than 15 W of power. Supporting up to 32 channels of streaming video, the AE8100 can execute video decoding and inferencing simultaneously, efficiently running popular NN models such as YOLOv5. This compact, low-power chip is ideal for edge server slots where high performance is needed but power and thermal overhead must be limited.

The main parameters:

AzureEdge Edge Servers

The AzureEdge Edge Servers are designed for edge computing and industrial automation. They feature the AzureBlade M-Series Accelerator Card, providing powerful computing performance for a broad variety of AI and parallel compute applications, and use a power-friendly 8-core Arm SoC as the host. The AzureEdge Edge Servers are general-purpose, high-performance edge servers that can be customized for Industry 4.0, robotics, autonomous driving, content filtering, signal processing, and many other fields.

The main parameters:

GP-GPU based on the RPP architecture

R8: An innovative GP-GPU based on the RPP chip architecture

  • The Reconfigurable Parallel Processor (RPP) architecture transfers programmability from the time domain to the space domain. Under the RPP architecture, instructions are distributed to different PEs, and data flows through the sequence of PEs to realize the program’s execution in space (a conceptual sketch of this idea follows this list).
  • RPP is suitable for programs with large amounts of data parallelism.
  • With AzurEngine’s innovations, the RPP architecture takes efficiency to a whole new level.
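As referenced above, the following conceptual sketch (illustration only, not AzurEngine software) shows the space-domain execution idea: each hypothetical “PE” in the toy model is assigned one instruction, and independent data elements stream through the chain of PEs, so the program executes in space rather than being replayed in time on a single unit.

    # Conceptual toy model of RPP-style spatial execution (illustration only, not
    # AzurEngine software): instructions are assigned to a chain of processing
    # elements (PEs), and each data element streams through that chain.

    # The "program" is a short sequence of instructions, one per PE.
    program = [
        lambda x: x * 2.0,     # PE 0: scale
        lambda x: x + 1.0,     # PE 1: bias
        lambda x: max(x, 0.0)  # PE 2: ReLU-style clamp
    ]

    def run_spatially(program, data):
        """Stream each element through the PE chain; the elements are independent,
        which is the kind of data parallelism the RPP architecture targets."""
        results = []
        for x in data:          # in hardware, many elements flow through concurrently
            for pe in program:  # data moves from PE to PE; instructions stay in place
                x = pe(x)
            results.append(x)
        return results

    print(run_spatially(program, [-1.5, 0.0, 2.5]))  # -> [0.0, 1.0, 6.0]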