
FPGA Based Astronomy

Added to IoTplaybook or last updated on: 09/10/2020

If you look inside any imaging or radio astronomical telescope you will find several FPGAs. FPGAs are selected because they can interface with sensors, perform any image processing required and, of course, output the image data for further analysis. It is in the analysis of the images, or series of images over years, that new discoveries are made, aiding our understanding of the universe.

Recently, comet NEOWISE (C/2020 F3) has been in the news as it makes its 160-million-mile close flyby of Earth.

Having worked on the FPGAs for an astronomy telescope a few years ago, which has just had technical first light, I thought it would be good fun to create a smaller-scale one which I could evolve over the years.

Things used in this project

Hardware components

  • MicroZed Embedded Vision Kit × 1
  • MicroZed × 1


Software apps and online services

  • Xilinx Vivado Design Suite

Image Sensor Selection

We have done a lot of image processing recently using MIPI or HDMI cameras. To be able to keep up with rapidly changing scenes, sensors with these interfaces (MIPI and HDMI) typically use a rolling-shutter approach. This means the lines are read out of the image sensor line by line while other lines are still capturing their image.

A rolling shutter can introduce artifacts into the image because each line is captured and read out at a slightly different time. Of course, reading out lines while other lines are still capturing the image also has the potential to add noise to the image. This noise could impact the final image and the scientific capabilities of the images produced, so it is best to avoid it if we can.

When used for science applications such as astronomy we often want to use a global-shutter sensor, which allows all lines of the imager to accumulate charge before being read out.

Rolling Shutter


Global Shutter

The only sensor I have which uses a global shutter is the ON Semiconductor PYTHON-1300 used as part of the MicroZed Embedded Vision Kit.

This sensor offers another advantage in that we are able to control its integration time. The integration time is the time provided for each pixel to accumulate photons (charge) before the image is read out. This means we can make the sensor work better in low-light conditions, as we can increase the integration time and accumulate more photons in each pixel.

To get the best quality images it is also best to cool the detector to reduce the dark current; however, I think we should still be able to get some good images without cooling.

Ideally, we would use a greyscale sensor; however, for this application the color sensor will have to suffice, as it is all I have available.

Hardware Design

To get the design up and running we are going to use Vivado to create the hardware design, which interfaces with the camera and also outputs an image so we can see the telescope output on HDMI if we desire (we will use software to capture the image as well). The HDMI output will help us set up and focus the camera, as otherwise we have no way to see what the imager observes.

Rather helpfully, we do not have to start from scratch; we can leverage the Embedded Vision Development Kit reference design. This is available from the Avnet GitHub.

Using the tag search for P1300, select the repository tagged embv_p1300C_mz7020_EMBV_20160223_205955.

From here we can download a zip or clone the repository. However, as the online example was created in Vivado 2015.2, we need to update the design a little first; we will do this in the TCL scripts provided.

I am going to recreate this project in Vivado 2019.1, the last pre-Vitis version.

Two scripts are used for the project creation process:

  • embv_p1300c.tcl - checks that the appropriate licenses are available and manages the build of the project
  • embv_p1300c_bd.tcl - creates the block diagram, which contains most of the design

But first, what do we need to update?

  • Update the license check to remove the check for CFA
  • Replace the CFA with the Demosaic - as the CFA is no longer available
  • Update the clocking architecture for the AXI Stream and AXI Lite control network for the Demosaic to reflect that only one clock is present

These files can be run in Vivado and the project will be created with the correct IP cores in place.

Change directory to the scripts directory, then copy the scripts attached to this project into the directory ProjectScripts.

Once this is done, source the script:

source make_embv_p1300c.tcl

This will create the block diagram for the project, which includes the Demosaic block.

While hard to see in detail, the image processing chain is as follows.

This looks considerably more complex in the Vivado block diagram.

Completed Block Diagram

Now we can build the project and generate the output products to allow us to proceed with the software creation.

Software Development

Once the hardware description file is generated we can open Xilinx SDK and begin to develop the software.

The software must perform the following functions:

  • Configure the image sensor
  • Configure the image processing pipeline to transfer images to the PS DDR
  • Configure the image processing pipeline to display images over HDMI
  • Provide a simple GUI control to be able to change the exposure time
  • Save the images as a Portable Grey Map (PGM) file

I want to record the data as a PGM file, as the image data should be stored as close to raw as possible so it does not suffer artifacts from compression.
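The PGM format is simple enough to write by hand: a short ASCII header (magic number, dimensions, maximum grey value) followed by the pixel data. As a minimal sketch of the file layout, the standard-library version below writes a binary (P5) PGM; the file name, dimensions and helper name are illustrative, and on the target the write would go through FatFs rather than stdio.

```c
#include <stdio.h>
#include <stdint.h>

/* Write an 8-bit greyscale image as a binary PGM (P5) file.
 * Returns 0 on success, -1 on failure. */
static int write_pgm(const char *path, const uint8_t *pixels,
                     unsigned width, unsigned height)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;

    /* PGM header: magic number, dimensions, maximum grey value */
    fprintf(f, "P5\n%u %u\n255\n", width, height);

    /* Raw pixel data follows the header, one byte per pixel */
    size_t written = fwrite(pixels, 1, (size_t)width * height, f);
    fclose(f);
    return (written == (size_t)width * height) ? 0 : -1;
}
```

Because the data after the header is raw bytes, no compression artifacts are introduced, which is exactly what we want for scientific images.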

Luckily, to help us get started, there is a driver software API for the PYTHON-1300 sensor we can use.

To set the exposure time we can use the function


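The driver's exposure function is not reproduced here, but the underlying idea is common to most global-shutter sensors: convert a requested exposure time into a number of sensor line periods and program that into the sensor's exposure register. The sketch below illustrates only that conversion; the function name `exposure_ms_to_lines` and the `LINE_PERIOD_US` value are hypothetical, not the actual driver API, and the real line period depends on the configured sensor clocks.

```c
#include <stdint.h>

/* Hypothetical line period for illustration only; the real value
 * depends on the PYTHON-1300 clocking configuration. */
#define LINE_PERIOD_US 15u

/* Convert a requested exposure time in milliseconds into the number
 * of sensor line periods to program into the exposure register. */
static uint32_t exposure_ms_to_lines(uint32_t exposure_ms)
{
    return (exposure_ms * 1000u) / LINE_PERIOD_US;
}
```

Increasing the programmed line count lengthens the integration time, which is how we accumulate more photons per pixel for low-light astronomy work.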
Within the SW we also need to control the I2C IO expander to enable or disable the power supplies on the embedded vision kit.

The conversion to PGM is actually quite simple, as the image processing pipeline helpfully converts the RGB format to YUV. The Y element is the pixel intensity, which can be used to create a greyscale image when we write out to the file.
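The RGB-to-luma step the pipeline performs can be sketched in software. A common fixed-point form of the BT.601 weighting (Y = 0.299 R + 0.587 G + 0.114 B) is shown below as an illustration of the principle; the hardware pipeline's exact coefficients may differ.

```c
#include <stdint.h>

/* BT.601 luma from RGB using fixed-point weights scaled by 256
 * (77 + 150 + 29 = 256), with +128 for rounding. */
static uint8_t rgb_to_luma(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((77u * r + 150u * g + 29u * b + 128u) >> 8);
}
```

Writing one such Y byte per pixel after the PGM header is all that is needed to produce the greyscale image file.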

To be able to write out the file we need to enable the xilffs (FatFs) library in the BSP settings.

Once this is enabled, we need to ensure the configurations are set correctly to allow us to write to the SD card.

This will allow us to open and write files as necessary; the code below sets up the file system for the SD card.

static FIL fil;                          /* File object */
static FATFS fatfs;                      /* File system object */
static char FileName[50] = "image.pgm";
static char *SD_File;
FRESULT Res;
TCHAR *Path = "0:/";

/* Mount the default drive; the final 0 selects lazy mounting */
Res = f_mount(&fatfs, Path, 0);
SD_File = (char *)FileName;

/* Create (or overwrite) the image file and rewind to the start */
Res = f_open(&fil, SD_File, FA_CREATE_ALWAYS | FA_WRITE | FA_READ);
Res = f_lseek(&fil, 0);

What is great about the PGM file is that we can open it later in MATLAB or Octave for analysis.

Combining with the Telescope

Of course, to be able to capture images of NEOWISE or any other planets, we need to be able to find them in the night sky. To do this I used a website which showed the visible planets and comets for my location (just outside London).

From a web retailer I bought a relatively simple telescope and stand. Mounting the camera to the telescope is a challenge: the Embedded Vision Kit uses a C-mount for the lens, which needs to be connected to the viewfinder.

The back of the telescope with the electronics mounted; the electronics are heavy, so they need supporting on the telescope to prevent them falling off. An ideal future project for a 3D printer.


To capture the stars, planets and potentially NEOWISE, which is visible from my garden, I set up a little observatory (I borrowed my son's playhouse to keep it all warm and dry).

Sadly, to date the sky has been either too cloudy or too rainy, so I will keep on trying over the week, and if I am successful I will come back and update the project with some images.

Wrap Up

This is another great application for FPGA-based image processing; the ability to work with the sensor over long integration times, then process and store the image, is a real benefit.

Of course, in a real system you might even stop the majority of the FPGA clocks while the image is being captured, to prevent additional noise from being injected into the captured image.

See previous projects here.

Additional Information on Xilinx FPGA / SoC Development can be found weekly on MicroZed Chronicles.

Schematics - Download - Project File 1 and Project File 2.


Adam Taylor


Adam Taylor is an expert in the design and development of embedded systems and FPGAs for several end applications (space, defense, automotive).

This content is provided by our content partner, an Avnet developer community for learning, programming, and building hardware. Visit them online for more great content like this.
