Accelerated Machine Vision is our term for industrial image-processing solutions built on state-of-the-art FPGA technology. Our solutions are based on MicroTCA, a vendor-independent Modular Open System Architecture (MOSA). The 19-inch rack-mount systems provide the infrastructure for clocking, triggering and synchronizing cameras without the need for external circuits. The redundant, hot-swap-capable infrastructure (switching, power supply and cooling) guarantees high availability and maintainability. Compared to traditional PC/server-based computer vision systems, our platform offers greater flexibility, lower development costs and scalability to any number of cameras.
Depending on the end application, we solve image-processing tasks with Artificial Intelligence (AI), OpenCV, Visual Applets®, or combinations thereof. The algorithms are hardware-accelerated by FPGAs, which enables time-critical execution on the edge device. The resulting latency advantage makes the platform particularly suitable for real-time applications.
Visit our website for a deep dive into the world of FPGA-accelerated machine vision and image processing.


  • Quality inspection
  • Line clearance
  • Image classification
  • Public safety

An integral component of our accelerated vision platform is the NAT-AMC-ZYNQUP-VISION, a powerful image-processing board based on the Xilinx Zynq UltraScale+ MPSoC. The board is ideal for machine vision applications because it combines the required flexibility with high performance: its on-die CPU and FPGA are coupled efficiently through internal low-latency DMAs. The FMC front end allows connecting a variety of industrial camera interfaces, such as GigE Vision®, USB and HDMI.

Key Features: 

  • Xilinx Zynq UltraScale+ MPSoC
  • Quad GigE-Vision® input camera interface with PoE
  • HDMI 2.0 4K Input and Output
  • Quad Channel DDR4-2400 for fast image buffering
  • PCI-Express Gen 3.0 (x8)

We provide a software architecture as a universal development platform. Depending on developer preference, algorithms can be developed either fully graphically with Visual Applets® or programmed at RTL level directly on the FPGA. The hybrid structure of processor and FPGA allows an efficient combination of C++ code and hardware acceleration; for example, the HLS toolbox can be used to accelerate OpenCV® algorithms on the FPGA. For implementing deep-learning convolutional networks (CNN, DNN) for object detection and classification, we support the conversion of TensorFlow- and Caffe-based networks to our platform.
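To make the HLS flow concrete, the sketch below shows the kind of per-pixel stencil operation that is typically moved from software onto an FPGA pixel pipeline, here a 3x3 Sobel edge filter. This is our own illustration in plain Python, not vendor code: the function name and image format are hypothetical, and in OpenCV the same operation corresponds to `cv2.Sobel`.

```python
# Illustrative sketch only: a 3x3 Sobel edge filter written as the kind of
# sliding-window stencil that HLS tools map onto a streaming FPGA pipeline.
# Pure Python, no external dependencies; images are lists of rows.

GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]   # horizontal-gradient kernel

GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]  # vertical-gradient kernel

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| for a grayscale image.
    Border pixels are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            # This 3x3 window is exactly what an FPGA line buffer holds
            # per clock cycle when the same filter runs as a hardware
            # streaming pipeline.
            for ky in range(3):
                for kx in range(3):
                    p = img[y + ky - 1][x + kx - 1]
                    gx += GX[ky][kx] * p
                    gy += GY[ky][kx] * p
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the filter responds only along the boundary columns.
image = [[0, 0, 0, 255, 255, 255]] * 6
edges = sobel_magnitude(image)
```

In software this nested loop runs one multiply-accumulate at a time; synthesized through HLS, the nine multiplications of the window execute in parallel every clock cycle, which is where the latency advantage of the FPGA comes from.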