FPGA Development using Python

27 Jun 2019

Reading time ~1 minute

I was fortunate enough after graduation to have an internship with General Dynamics Mission Systems on Kauai doing R&D, and even more fortunate to be evaluating computer vision programs on the cutting edge of computing. Heterogeneous computing systems are emerging as the vision for the future as Moore's Law comes to an end.

I worked closely with the Xilinx ZCU104 and PYNQ-Z1 evaluation boards and learned a lot about the new framework Xilinx is developing to bridge the gap between high- and low-level programming. One of its biggest ideas is reconfigurability and reusability - something that has yet to be seen in the world of FPGAs.
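To give an idea of what that looks like in practice, here is a minimal sketch of loading a hardware overlay from Python with the PYNQ API; the bitstream name is a placeholder, not the exact design I ran.

```python
from pynq import Overlay

# Load a bitstream (hardware "overlay") onto the programmable logic.
# "base.bit" ships with the PYNQ images; a custom accelerator would
# use its own .bit/.hwh pair instead.
overlay = Overlay("base.bit")

# Once the overlay is downloaded, its IP blocks show up as Python
# attributes, so the hardware can be driven like any other object.
print(overlay.ip_dict.keys())  # list the IP cores exposed by this overlay
```

Swapping in a different accelerator is just a matter of loading a different bitstream, which is what makes the reconfigurability story so appealing.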

Some of the work I did can be found on my GitHub.

One of the most interesting examples of hardware acceleration was running optical flow, which is commonly used as a tracking mechanism in computer vision. Running on the general-purpose ARM processor achieved less than one frame per second, while the hardware overlay reached ~60+ FPS - at least a 60x increase in throughput.

Hardware acceleration comparison for optical flow.
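For context, the software-only baseline can be timed with a loop like the sketch below, using OpenCV's Farneback dense optical flow; the input clip is a placeholder, and on the board the flow call would be replaced by the hardware overlay's accelerated function rather than this CPU implementation.

```python
import time
import cv2

# CPU baseline: dense optical flow (Farneback) on consecutive frames.
# On the ZCU104's ARM cores a loop like this ran at well under 1 FPS;
# the hardware overlay pushed the same workload past 60 FPS.
cap = cv2.VideoCapture("sidewalk.mp4")  # placeholder input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray
    frames += 1

print(f"software optical flow: {frames / (time.time() - start):.2f} FPS")
```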

I created a demonstration to track moving pedestrians on a busy sidewalk, which would have run in real time; however, there are still some issues Xilinx will need to work out:

  • Loading multiple hardware-accelerated overlays (e.g. optical flow and 2D filtering simultaneously) - the memory management class (xlnk) couldn't be loaded twice (see the sketch after this list).
  • HDMI I/O for the ZCU104 when using the PYNQ-ComputerVision library.

Detected pedestrians on a busy sidewalk.
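For the first issue, the allocation pattern involved looks roughly like the sketch below; the bitstream names and buffer shapes are placeholders, and bringing up the Xlnk memory manager for a second overlay is where things fell over at the time.

```python
import numpy as np
from pynq import Overlay, Xlnk

# The accelerators need physically contiguous DMA buffers, which PYNQ
# hands out through the Xlnk memory manager.
of_overlay = Overlay("optical_flow.bit")  # placeholder bitstream name
xlnk = Xlnk()
frame_in = xlnk.cma_array(shape=(1080, 1920), dtype=np.uint8)
flow_out = xlnk.cma_array(shape=(1080, 1920, 2), dtype=np.int16)

# Loading a second overlay (e.g. the 2D filter) and setting up its own
# buffers is where things broke - the xlnk memory management class
# could not be loaded twice in one session.
filter_overlay = Overlay("filter2d.bit")  # this step triggered the issue
```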

Once those issues are fixed, everything should work great.

  • Demo video of 2D filters.

  • Tracking demonstration.



embedded-systems machine-learning