Undergraduate Projects 2006

 

List of Undergraduate Projects 2006

  1. Mike Aitken FPGA-Based Data Acquisition System for Ultrasound Tomography
  2. Roger Deane Developing a Pulsar Search Algorithm
  3. Rethabile Khutlang Design and Implementation of RF and Microwave Filters Using Transmission Lines
  4. Gunther Lange Investigation into Pattern Synthesis and Target Tracking Techniques
  5. Dinè le Roux Predicting the Accuracy with Which Upper Air Wind Dynamics can be Measured by the Doppler Tracking of a Pilot Balloon
  6. Onkgopotse Isaac Madikwane Investigation, Design, Construction and Testing of a Broadband Antenna for a Radio Telescope Receiver
  7. Jason Manley Low Cost Penguin RFID Reader with GSM Uplink
  8. Peter Leonard McMahon High Performance Reconfigurable Computing for Science and Engineering Applications
  9. Joseph Milburn An Algorithmic State Machine Simulation Package for Teaching Purposes
  10. Tsepo Sadeq Montsi Design and Implementation of a Parallel Pulsar Search Algorithm
  11. Amy Phillips Comparison of Imaging Radar Scenes at 2.5 m and 6 cm
  12. Ka-Chuen Kenny Wai Investigating the Difference of Digital Elevation Models between Shuttle Radar Topographic Mission (SRTM) & Photogrammetry Techniques
  13. Andrew Woods An FX Software Correlator for the Karoo Array Telescope Project

Back to list of Undergraduate Projects

 

Abstracts of Undergraduate Projects 2006

1. Mike Aitken: FPGA-Based Data Acquisition System for Ultrasound Tomography

Abstract:

Ultrasound tomography involves the acquisition and analysis of large amounts of data, both of which must be done at high speed if the technique is to be implemented in a mobile product. FPGA technology promises the parallel processing capability needed to accomplish these demanding processing tasks. At the same time, the technology allows a system to be designed and prototyped quickly and cost-effectively.

However, an FPGA device is limited by the speed at which it can acquire raw data. To facilitate ultrasound tomography, external circuitry must acquire high-resolution data at speeds sufficient for effective signal processing. The circuitry must also perform the opposite function: transforming high-resolution digital data into analogue signals at high speed.

This thesis sets out to investigate recent developments in FPGA technology, looking specifically at the benefits that FPGAs deliver to embedded system design. It then suggests a software/hardware configuration that provides a cheap, fast-tracked working demonstration of an ultrasound tomography system, and takes the reader through the process of building the entire system.

Lastly, and most importantly, since this project merely lays the foundation for further exploration of FPGA technology, a range of possible project extensions is suggested.

Back to Top
Back to Group Theses

 

2. Roger Deane: Developing a Pulsar Search Algorithm

Abstract:

Pulsar searching is a highly intensive computational process involving enormous data sets and iterations through unknown parameters.

This thesis aims to develop a simple algorithm capable of recovering the pulsar signal from below the receiver noise level.

The relevant theory required to achieve this is researched. The algorithm is coded in C to gain a speed advantage over scripting languages. It accounts for interstellar dispersion and performs an array of procedures to increase the probability of detecting the pulsar signal.
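As an illustration of the dispersion correction the abstract mentions, the sketch below evaluates the standard cold-plasma dispersion delay between two observing frequencies. It is a minimal Python sketch, not the thesis's C implementation, and the dispersion measure and frequencies are made-up example values.

    # Standard dispersion delay of a channel relative to a reference frequency.
    # Illustrative sketch only; the thesis's C code is not reproduced here.
    KDM = 4.148808e3  # dispersion constant, s MHz^2 pc^-1 cm^3

    def dispersion_delay(dm, f_chan_mhz, f_ref_mhz):
        """Extra arrival delay (seconds) at f_chan_mhz relative to f_ref_mhz."""
        return KDM * dm * (f_chan_mhz ** -2 - f_ref_mhz ** -2)

    # Example: DM = 50 pc cm^-3, a 1350 MHz channel against a 1400 MHz reference
    print(dispersion_delay(50.0, 1350.0, 1400.0))  # ~0.008 s

Summing channels without removing these frequency-dependent delays smears the pulse, which is why a search must trial many dispersion measures.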

Back to Top
Back to Group Theses

 

3. Rethabile Khutlang: Design and Implementation of RF and Microwave Filters Using Transmission Lines

Abstract:

RF and microwave filters can be implemented with transmission lines. Filters are significant RF and microwave components, and transmission line filters can be easy to implement, depending on the type of transmission line used. The aim of this project is to develop a set of transmission line filters for students to do practical work with in RF and microwave applications. The characteristic impedance, and the ease with which precise lengths can be cut, are two important characteristics of transmission lines; they are therefore used to investigate which transmission line to use to implement filters.

The first part of this project looks into which transmission line to use to construct a filter. The different filter design theories are then reviewed, and the design theory that allows for implementation with transmission lines is used to design the filters.

Open-wire parallel lines are used to implement transmission line filters. They are the transmission lines with which it is easiest to change the characteristic impedance. The resulting filters are periodic in frequency. The open-wire lines can be used well into the 1 m wavelength region. For characteristic impedance below 100 Ohms, the performance of open-wire lines is limited.
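The limitation noted above follows from the geometry of a two-wire line. The sketch below uses the standard impedance formula for an air-spaced pair, Z0 = 120 * acosh(D/d) ohms, where D is the centre-to-centre spacing and d the conductor diameter; the dimensions are illustrative, not the project's.

    import math

    def z0_open_wire(spacing, diameter):
        """Characteristic impedance (ohms) of an air-spaced two-wire line.
        spacing: centre-to-centre distance, same units as conductor diameter."""
        return 120.0 * math.acosh(spacing / diameter)

    print(round(z0_open_wire(10.0, 1.0)))  # ~359 ohms: easy to realise
    print(round(z0_open_wire(1.37, 1.0)))  # ~100 ohms: wires almost touching

Below about 100 ohms the required spacing approaches the wire diameter, which is why low-impedance open-wire filters perform poorly.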

Back to Top
Back to Group Theses

 

4. Gunther Lange: Investigation into Pattern Synthesis and Target Tracking Techniques

Abstract:

The core objective of this thesis is to investigate various methods of target tracking and interference canceling within the field of passive radar. In particular, these methods were analysed by means of simulations within Matlab using half-wave dipoles as elements of a linear array.

This thesis begins with a brief background of a passive radar system, noting the importance of beam forming and null placement within such a system. This is followed by a description of the main objectives and a detailed summary of each chapter of this report. Literature covering all aspects considered important and applicable to the topic of this thesis is then reviewed in detail.

The work done in this thesis is introduced in a modular fashion, beginning with the description of a single half-wave dipole element. Thereafter, a second element is added, forming an array. By means of simulations, ways of forming a beam in a particular direction and fixing a null in another direction were investigated. In addition, phase monopulse angle sensing techniques [4] were introduced and applied to the two element array in order to investigate methods of target tracking.
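A minimal numerical sketch of the two-element case described above: steering the sum beam towards a chosen angle while the difference pattern, as used in phase monopulse sensing, has a null in the same direction. The half-wave spacing and look angle are illustrative assumptions, not the thesis's simulation parameters.

    import numpy as np

    d = 0.5                                   # element spacing in wavelengths
    theta0 = np.radians(20.0)                 # desired look direction
    theta = np.radians(np.linspace(-90, 90, 361))

    # Inter-element phase for a plane wave from angle theta, with the second
    # element weighted so the pair adds coherently towards theta0.
    psi = 2 * np.pi * d * (np.sin(theta) - np.sin(theta0))
    af_sum = np.abs(1 + np.exp(1j * psi)) / 2     # beam peaks at theta0
    af_diff = np.abs(1 - np.exp(1j * psi)) / 2    # null at theta0 (monopulse)

    i0 = np.argmin(np.abs(theta - theta0))
    print(round(af_sum[i0], 3), round(af_diff[i0], 3))  # 1.0 and 0.0

The ratio of the difference to the sum pattern near theta0 is what a phase monopulse tracker uses as its angle-error signal.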

Next, a third antenna element was added to the array. In this arrangement, methods of independent null steering were considered. Furthermore, phase monopulse angle sensing techniques [4] in the presence of a target and an additional interference signal were reinvestigated and discussed, also using the three element arrangement.

Finally, conclusions were drawn for the core findings of this thesis and areas of possible future work were considered.

Back to Top
Back to Group Theses

 

5. Dinè le Roux: Predicting the Accuracy with Which Upper Air Wind Dynamics can be Measured by the Doppler Tracking of a Pilot Balloon

Abstract:

Instructions to execute this project were given by Mr Ian Robertson of Tellumat (Pty) Ltd in May 2006, under the guidance of Professor Mike Inggs. The need for the project arose when new ways of tracking weather balloons (otherwise known as pilot balloons) were being investigated by Tellumat on behalf of the South African Weather Service. The brief from Mr Robertson is summarised below.

Twice a day, weather balloons are released from weather stations throughout South Africa. By tracking the position of a weather balloon, the wind speed and wind direction of upper-air regions can be calculated. This wind information is vital in producing weather forecasts. At present, the South African Weather Service uses a system in which balloon positions are recorded by an operator using a theodolite. This method of balloon tracking is highly unreliable, and thus an alternative method of tracking needs to be implemented.
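The position-to-wind step is simple geometry. The hypothetical sketch below shows how two successive tracked positions yield a wind estimate, using the usual meteorological from-direction convention; the thesis's concern is how tracking errors propagate into these quantities.

    import math

    def wind_from_positions(p1, p2, dt):
        """Wind speed (m/s) and direction from two successive balloon
        positions (east, north offsets in metres) taken dt seconds apart."""
        u = (p2[0] - p1[0]) / dt      # eastward wind component
        v = (p2[1] - p1[1]) / dt      # northward wind component
        speed = math.hypot(u, v)
        # Meteorological convention: the direction the wind blows FROM,
        # in degrees clockwise from north.
        direction = math.degrees(math.atan2(-u, -v)) % 360
        return speed, direction

    print(wind_from_positions((0.0, 0.0), (300.0, 400.0), 60.0))  # ~8.3 m/s from ~217 deg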

A better option would be to use Doppler tracking to track the balloon. With Doppler tracking, however, there are several factors that contribute to the tracking system's inaccuracies. The weather service requires an indication of the extent of the deviation of the calculated path and velocity of the balloon from its actual path and velocity. If the deviation is too great, the Doppler tracking method cannot be used.

The objective of this thesis is to predict the accuracy with which a weather balloon can be tracked using a Doppler tracking technique, and thus to predict the accuracy with which wind velocities (speed and direction) can be measured.

Back to Top
Back to Group Theses

 

6. Onkgopotse Isaac Madikwane: Investigation, Design, Construction and Testing of a Broadband Antenna for a Radio Telescope Receiver

Abstract:

An antenna forms the interface between free space and the transmitter or receiver. The choice of an antenna normally depends on factors such as the gain and bandwidth the antenna can offer. Signals from satellites travel thousands of kilometres to the earth and, as the Friis equation (Appendix A) shows, they arrive as weak signals. Under these conditions, high gain antennas are required. The focus here is on low cost antennas, and since standard ones such as the half wave dipole and the folded dipole cannot offer the needed gain and bandwidth, attention shifts to the Yagi-Uda and log-periodic dipole array antennas.
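For reference, the Friis relation mentioned above (worked out in Appendix A of the thesis) can be evaluated as below. The link parameters are illustrative assumptions, not figures from the thesis.

    import math

    def friis_received_power_dbw(pt_w, gt_dbi, gr_dbi, freq_hz, dist_m):
        """Received power (dBW) over a free-space link (Friis equation)."""
        lam = 3.0e8 / freq_hz
        fspl_db = 20 * math.log10(4 * math.pi * dist_m / lam)  # path loss
        return 10 * math.log10(pt_w) + gt_dbi + gr_dbi - fspl_db

    # 10 W transmitter at 1.5 GHz over 36 000 km with modest antenna gains
    print(round(friis_received_power_dbw(10.0, 3.0, 10.0, 1.5e9, 3.6e7), 1))  # ~ -164 dBW

Powers of this order are why receive-antenna gain matters so much in the link budget.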

The gain of the Yagi antenna can be increased by approximately 1 dB for every additional director. However, properties such as the radiation pattern, sidelobe level and input impedance have to be taken into account. The question that comes to the fore is then: how many directors suit an antenna with certain properties? To encompass all these factors, optimization software packages for Yagi antennas have been developed over the years. Some of these packages use genetic algorithms to find the optimum length for the elements and their spacing. The algorithms employ method of moments (MOM) based electromagnetic codes to compute current distributions on the antenna structure while taking into account the mutual coupling between elements. Yagi antennas have narrow bandwidths, of the order of 2%, when designed for high gain.

On the other hand, log-periodic dipole array (LPDA) antennas offer a wider bandwidth and can have gains as high as 10 dB. The dipoles are connected to the source using a twin transmission line in such a way that the phase is reversed at each connection relative to the adjacent elements. When connected this way, the bandwidths of the dipoles add up to give a broader overall bandwidth. The transmission line is often replaced with a pair of metal boom structures separated by dielectric material. R. L. Carrel, who conducted extensive studies on log-periodic antennas, prepared curves and devised formulas for calculating parameters such as the required number of dipoles and their spacing, which are invaluable for the design of the LPDA.
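The log-periodic geometry behind Carrel's curves can be sketched as follows. The scale factor tau and relative spacing sigma below are illustrative choices, not the thesis's design values, and a practical design would size the active region more generously.

    # Log-periodic dipole geometry: each element is tau times its neighbour.
    # tau and sigma are illustrative; Carrel's curves relate them to gain.
    tau, sigma = 0.9, 0.16
    f_low, f_high = 1.4e9, 1.8e9          # target band, Hz
    c = 3.0e8

    lengths = [c / f_low / 2]             # longest dipole: half-wave at f_low
    while lengths[-1] > c / f_high / 2:   # add shorter dipoles to cover f_high
        lengths.append(tau * lengths[-1])

    spacings = [2 * sigma * l for l in lengths[:-1]]   # boom gap after each dipole
    print([round(l * 1000) for l in lengths])          # element lengths, mm
    print([round(s * 1000) for s in spacings])         # element spacings, mm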

Coaxial cables are used to feed the outdoor antennas, and either radio frequency (RF) transformers or baluns are used for impedance transformation between the antenna and the cable. Mismatches form standing waves on the cable which can add up constructively or destructively and hence distort or even cancel the signal of interest. The RF transformers and baluns also prevent cable radiation, which has undesirable effects on the performance of an antenna.

If the main priority were gain, then the Yagi antenna would be the best option. In the end, a compromise was made between gain and broader bandwidth by selecting the log-periodic dipole array antenna. A further advantage of this antenna is that its input impedance can be set to the desired value by selecting the appropriate diameter for the dipole elements. Aluminium tubes and rods were chosen for the LPDA, as aluminium does not rust. The diameters of these rods and tubes were chosen for compactness as well as ease of assembly of the antenna elements. The dipole elements are attached to the boom structures using pop rivets. Dipole elements that are too thin would result in a high parallel-rod feeder impedance; this would have an adverse effect on the spacing of the booms and therefore violate the desired compactness.

Too much boom spacing would require a longer section of the coaxial cable insulation to be stripped. Signal power is lost to cable radiation at high frequencies, and these losses would grow if a longer section of the coaxial cable were stripped. For this LPDA, the cable used is RG-142B/U, which has double shielding. The drawback of this cable is its stiffness, but it otherwise has low signal attenuation. Connecting the RF transformer directly to the antenna was impractical, so coaxial cable was used again on the secondary side; this cable had to be kept short so that cable radiation was minimised. The transformer used is the TCN1-23 by Mini-Circuits®, a surface mount miniature transformer. It was mounted on Veroboard to facilitate the connections to the cables.

After the exhaustive design and construction phases, tests were conducted to determine what had been achieved. The first test determined the bandwidth of the LPDA by measuring the scattering parameters of the antenna with a network analyzer. The power transmitted to the antenna was referenced to 0 dB, so the plot displayed on the network analyzer represents the reflected power as a function of frequency. In the band where the antenna is operating, only a tiny fraction of the power is reflected back to the analyzer, indicating that most of the power has been radiated by the antenna. The antenna covers frequencies of 1.46 GHz to 1.77 GHz, as indicated in Figure 5.5.

The final test determined the radiation pattern of the LPDA. To obtain a well defined radiation pattern, tests should be conducted inside an RF anechoic chamber, where the antenna cannot be influenced by surfaces or objects that re-radiate electromagnetic energy. Since no chamber was available, reflections from surrounding surfaces distorted the measured radiation pattern, which is shown in Figure 5.10. To conduct this test, a source of radiation was required, and a half-wave dipole was therefore built. It was oriented so that its polarization matched that of the LPDA in order to receive energy efficiently. As the radiation pattern indicates, the LPDA is a directional antenna: to receive signals efficiently, it should be pointed directly at the transmitter.

Apart from the slightly reduced bandwidth of the LPDA, the objectives have been met. The bandwidth can be corrected by replacing the rear dipoles with slightly longer ones, which will ensure that the antenna starts to receive at frequencies of about 1.4 GHz.

Back to Top
Back to Group Theses

 

7. Jason Manley: Low Cost Penguin RFID Reader with GSM Uplink

Abstract:

This project designs and implements an electronic system for automatically logging the movements of penguins on Robben Island using RFID and GSM technologies. The design is systematic, from ground-level upwards.

We discuss the shortcomings of an existing system which is in place on the island for this purpose and propose possible solutions. The selected modular solution features an uplink device with a generic interface for logging data from multiple connected peripherals, and two interconnected RFID readers. A low-cost prototype system is constructed and its performance is evaluated for installation on the island as a replacement for the existing system.

We conclude that RFID is a technology offering many benefits, but careful system implementation is necessary if the full benefit of the technology is to be extracted. Recommendations are made as to how the replacement system may be further improved by using additional antennas or a different RFID interface.

Back to Top
Back to Group Theses

 

8. Peter Leonard McMahon: High Performance Reconfigurable Computing for Science and Engineering Applications

Abstract:

This thesis investigates the feasibility of using reconfigurable computers for scientific applications.

We review recent developments in reconfigurable high performance computing. We then present designs and implementation details of various scientific applications that we developed for the SRC-6 reconfigurable computer. We present performance measurements and analysis of the results obtained.

We chose a selection of applications from bioinformatics, physics and financial mathematics, including automatic docking of molecular models into electron density maps, lattice gas fluid dynamics simulations, edge detection in images and Monte Carlo options pricing simulations.
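As a flavour of one listed workload, the sketch below prices a European call by Monte Carlo under geometric Brownian motion. It is a plain CPU-side Python sketch with made-up parameters, standing in for the kind of inner loop the thesis mapped onto the SRC-6.

    import numpy as np

    def mc_call_price(s0, strike, r, sigma, t, n_paths=1_000_000, seed=0):
        """European call price by Monte Carlo under geometric Brownian motion."""
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)
        s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
        payoff = np.maximum(s_t - strike, 0.0)
        return np.exp(-r * t) * payoff.mean()

    # Illustrative parameters; the Black-Scholes closed form gives ~8.02
    print(round(mc_call_price(100.0, 105.0, 0.05, 0.2, 1.0), 2))

The appeal for reconfigurable hardware is that each path is independent, so many path evaluations can run in parallel.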

We conclude that reconfigurable computing is a maturing field that may provide considerable benefit to scientific applications in the future. At present, the performance gains offered by reconfigurable computers are not sufficient to justify the expense of the systems, and the software development environment lacks the language features and library support that application developers need in order to focus on developing correct software rather than on software infrastructure.

Back to Top
Back to Group Theses

 

9. Joseph Milburn: An Algorithmic State Machine Simulation Package for Teaching Purposes

Abstract:

Introduction:
A processing task can be performed by a series of register micro-operations controlled by a sequencing mechanism. The micro-operations can be represented as a hardware algorithm with a series of routines. Deriving the hardware algorithms to perform specific processing tasks is the biggest and most creative challenge in digital logic design. One of the approaches used in overcoming this challenge is the algorithmic state machine (ASM) chart, which has special properties tying it to the hardware implementation of the algorithm it represents.

Objectives:
This project aims to develop a simulation environment with a graphical user interface that offers university students a grounding in ASM chart theory.

ASM Theory:
An ASM chart consists of three types of elements: state boxes, decision boxes and conditional output boxes.
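To make those three elements concrete, here is a minimal, hypothetical sketch of how an ASM block can be stored and stepped. It is in Python purely for illustration; the thesis's package was written in C++.

    # One ASM block per state: Moore outputs (state box), an optional input
    # test (decision box), and extra outputs on the taken branch (conditional
    # output boxes). This structure and its names are illustrative only.
    asm_chart = {
        "S0": {"outputs": ["CLR"], "test": "start",
               "true": ("S1", ["LOAD"]), "false": ("S0", [])},
        "S1": {"outputs": ["RUN"], "test": None, "next": ("S0", [])},
    }

    def step(state, inputs):
        block = asm_chart[state]
        asserted = list(block["outputs"])                  # state-box outputs
        if block["test"] is None:
            nxt, conditional = block["next"]
        else:
            branch = "true" if inputs[block["test"]] else "false"
            nxt, conditional = block[branch]
        return nxt, asserted + conditional

    print(step("S0", {"start": 1}))   # ('S1', ['CLR', 'LOAD'])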

Choice of Development Environment:
The ASM diagram was implemented using a dynamic array (list) of elements; C++ was therefore chosen over Java because of its provision for arbitrary memory access. The development framework chosen was wxWindows.

Software Design:
The Unified Modelling Language (UML) was used to model the software. The software design process entailed specifying use-case models, Class-Responsibility-Collaborator (CRC) cards, a class relationship model and object behaviour models (interaction and state diagrams).

Conclusions and Recommendations:
A graphical user interface (GUI) allowing an ASM diagram to be built, experimented with and verified was successfully implemented. The simulation package achieves its primary goal of demonstrating ASMs, affording students a clear understanding of how they work.

The feature set of the final simulation package is rather limited, and a number of future developments are suggested.

Back to Top
Back to Group Theses

 

10. Tsepo Sadeq Montsi: Design and Implementation of a Parallel Pulsar Search Algorithm

Abstract:

This thesis describes the design and implementation of a parallel pulsar search algorithm.

Pulsars are a rare form of neutron star. They are of great interest to astronomers due to their unique properties. Pulsars rotate rapidly and emit beams of radio energy. If the axis of rotation and the direction of the radio beams do not align, then from Earth these beams are perceived as lighthouse-like pulses. Due to an interaction with the interstellar medium called dispersion, these pulses are generally undetectable in a direct observation.

A pulsar search algorithm takes in observation data from a radio telescope and counteracts the effects of dispersion on this data; it then performs an analysis in the frequency domain to detect possible pulsars. An algorithm that performs these manipulations is computationally intensive, so a method of speeding up the computation is desirable.

One method of improving speed is to run the algorithm simultaneously on multiple computers, i.e. parallel computing. After analysing a sequential pulsar search algorithm, SIMD and MISD parallel pulsar search algorithms were designed. These were implemented and then tested on the KAT cluster to measure their performance. Both proved faster than the sequential algorithm; the MISD implementation proved the fastest.
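A minimal sketch of the SIMD-style decomposition described above: the same search runs in parallel, one trial dispersion measure per worker. The de-dispersion and detection steps here are toy stand-ins, not the thesis's implementation.

    from multiprocessing import Pool
    import numpy as np

    def dedisperse(data, dm):
        """Toy de-dispersion: undo a DM-proportional lag per channel, then
        sum the channels into a single time series (illustrative only)."""
        lags = (dm * np.arange(data.shape[0])).astype(int)
        return sum(np.roll(data[c], -lags[c]) for c in range(data.shape[0]))

    def search_one_dm(args):
        data, dm = args
        spectrum = np.abs(np.fft.rfft(dedisperse(data, dm)))
        return dm, float(spectrum[1:].max())   # strongest periodic component

    def parallel_search(data, dm_trials, workers=4):
        # One DM trial per worker; on some platforms this must be called
        # under an `if __name__ == "__main__":` guard.
        with Pool(workers) as pool:
            return pool.map(search_one_dm, [(data, dm) for dm in dm_trials])

Because each DM trial is independent, this decomposition scales naturally across cluster nodes.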

Back to Top
Back to Group Theses

 

11. Amy Phillips: Comparison of Imaging Radar Scenes at 2.5 m and 6 cm

Abstract:

Objectives

The objective of this report is to interpret and compare images from different Synthetic Aperture Radar (SAR) sensors and to decide which images are better for detecting different features of the landscape. The images cover the Bot Rivier Estuary near Hermanus in the Western Cape of South Africa. The sensors used to retrieve images are the South African SAR sensor (SASAR 1) and the European Remote Sensing satellites (ERS-1 and ERS-2). Optical images obtained from the Department of Surveys and Mapping and Google Earth are used to confirm my findings.

Background

The SASAR 1 sensor operates at VHF (141 MHz) and has all four polarisation settings, i.e. HH, HV, VH and VV. The ERS sensor operates in the C-band (5.3 GHz) and has only VV polarisation.

There are a number of factors to consider when interpreting the objects in a radar image. These include shape, size, tone, texture and association. If used in the right way, these can be clues to identifying objects and features. To further ease the process of image interpretation, the images are registered. In this process all images from the different sensors are mapped onto one co-ordinate system in which each pixel corresponds to a particular co-ordinate. The optical images are also mapped to this co-ordinate system. This allows me to compare the images pixel by pixel.

SAR Sensors

An SAR sensor is a system, and all systems have certain parameters. In this case I look at wavelength and polarisation. These two parameters affect the way the electromagnetic wave interacts with the surface, and they must be chosen according to the features one wishes to detect. For example, a longer wavelength is able to penetrate deeper into the surface volume, giving insight into the vegetation type. Distortion in radar images can occur in numerous ways. Probably the largest cause of distortion is the radar signal arriving back at the sensor at the wrong time, i.e. earlier or later than it should relative to the surroundings. This can result in layover or foreshortening effects.

The target also has parameters which affect the way that energy is sent back to the sensor. These include surface roughness, the dielectric constant of the surface material and the size of the target compared to the wavelength. These parameters can cause scattering or Bragg resonance.

The main purpose of SAR technology is its ability to improve the azimuth resolution. The azimuth resolution is independent of the altitude of the sensor and depends only on the size of the antenna. A smaller antenna results in a finer resolution.

Doppler shift theory offers an alternative way of calculating the azimuth resolution. The sensor uses the Doppler echoes to determine the distances between points on the ground.
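The counterintuitive resolution claim above can be checked numerically: for a focused SAR the azimuth resolution is approximately L/2 (half the antenna length), independent of range, whereas a real-aperture system degrades as lambda*R/L. The antenna length and ranges below are illustrative assumptions; the frequency is the ERS C-band value from this report.

    # Real-aperture vs focused-SAR azimuth resolution, in metres.
    c = 3.0e8
    lam = c / 5.3e9            # ERS C-band wavelength (~5.7 cm)
    L = 10.0                   # assumed antenna length, metres

    for R in (400e3, 850e3):   # two illustrative slant ranges
        print(round(lam * R / L), L / 2)   # ~2264 and ~4811 vs a constant 5.0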

Imaging Processing

The image processing of the SASAR 1 and ERS images was done by Minette Lubbe at the CSIR. The two methods used are principal component analysis and image fusion.

Principal component analysis involves a linear transform. The covariance matrix of the original data is used to find the eigenvalues and eigenvectors; the eigenvectors supply the coefficients of the linear transform. The highest eigenvalue corresponds to the first principal component, which has the highest variance and hence contains the most information about the original data set. The first four principal components are the most relevant, as they contain ninety-nine percent of the variance of the original data.
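A minimal sketch of that transform, treating each band as a variable and each pixel as an observation; the array shapes are assumptions for illustration, not the report's data.

    import numpy as np

    def principal_components(x):
        """x: (n_pixels, n_bands) array of co-registered band values.
        Returns the transformed components and their variances."""
        xc = x - x.mean(axis=0)                  # remove per-band means
        cov = np.cov(xc, rowvar=False)           # band covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric, so use eigh
        order = np.argsort(eigvals)[::-1]        # largest variance first
        return xc @ eigvecs[:, order], eigvals[order]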

Two different types of image fusion were performed: standardised principal component analysis, and intensity-hue-saturation fusion. Both techniques involve substituting one of the image channels with a high resolution optical image.

Interpretation and Comparisons

This is the analysis part of the report. I look first at the man-made features. These include buildings, power lines and roads.

Individual buildings and small clusters are detected in the SASAR VHF images. Urban areas such as Kleinmond and Hermanus are detected in both the SASAR and ERS images. Roads are generally undetected; they are only detected by the SASAR sensor when lined by trees. Power lines are seen as bright stripes and pylons as bright spots; these are only detected in the SASAR images.

Natural features such as vegetation are not detected by the ERS sensor and are seen only by the SASAR sensor. This is because the C-band frequency of the ERS sensor does not allow the signal to penetrate the surface volume.

Water bodies are well detected by the ERS sensor as ocean backscatter improves as the frequency increases.

Relief such as mountains, ridges and ravines can be made out clearly in the ERS images, as the higher frequency is more sensitive to these kinds of changes in the terrain. The speckle caused by constructive and destructive interference is more evident in the SASAR VHF images because of the lower frequency.

Conclusions

The SASAR sensor can be used for detecting man-made objects such as buildings, urban areas and power lines. It can also be used for detecting various types of vegetation. Ambiguities may occur when classifying areas of dense forest and urban areas, as they appear very similar in the SASAR images; a koppie may also be confused with an individual building.

The ERS sensor would be the appropriate choice for detecting water bodies, shore lines and types of relief.

Back to Top
Back to Group Theses

 

12. Ka-Chuen Kenny Wai: Investigating the Difference of Digital Elevation Models between Shuttle Radar Topographic Mission (SRTM) & Photogrammetry Techniques

Abstract:

This report focuses on the investigation of the differences between SRTM and photogrammetry techniques as applied in remote sensing.

SRTM was a space shuttle mission that captured radar data from space. Photogrammetry is a technique in which data is captured from photographs taken from an aircraft.

In order to compare the two methods, their data had to be standardised using a program called IDL, with which the data from the photogrammetry map was altered and placed into a matrix. The software used for comparison, ENVI, could then understand both data sets and compare their properties via functions such as geo-referencing, layer stacking and re-sampling.

The main method of comparison was to compare the heights of areas obtained from the SRTM DEM and the photogrammetry DEM with a reference trig beacon. Deviations could be obtained from these comparisons, and the technique whose heights deviated more from the trig beacon would be the less accurate method.
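The comparison step itself is simple once the DEMs are co-registered; a toy sketch with synthetic grids and a made-up beacon position and height is shown below.

    import numpy as np

    beacon_rc, beacon_height = (412, 305), 512.3    # beacon pixel and height (m)
    srtm_dem = np.full((1000, 1000), 514.0)         # synthetic placeholder DEMs
    photo_dem = np.full((1000, 1000), 509.0)

    for name, dem in (("SRTM", srtm_dem), ("Photogrammetry", photo_dem)):
        deviation = float(dem[beacon_rc]) - beacon_height
        print(name, round(deviation, 1))            # 1.7 (over) vs -3.3 (under)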

Results of the simulation showed that SRTM acquired more accurate heights, as its heights deviated less from the trig beacon than the photogrammetry height values. In most cases SRTM over-estimated the heights of regions, while photogrammetry techniques under-estimated them. However, the deviation between the two methods was not large, and each method retains advantages of its own.

Back to Top
Back to Group Theses

 

13. Andrew Woods: An FX Software Correlator for the Karoo Array Telescope Project

Abstract:

This report describes the relevant electrical engineering issues involved in designing and implementing a software correlator, more specifically an FX correlator.

A correlator is a hardware or software device that combines sampled voltage time series from one or more antennas to produce sets of complex visibilities. At all times it was kept in mind that this correlator would eventually need to be implemented in hardware.

The radio astronomy and DSP concepts required to construct a software correlator are discussed, and the reasons for needing a correlator are made clear.

The output of the correlator and its uses are briefly described. DSP concepts used to design and implement a working correlator are discussed. Basic digital filter concepts and characteristics are reviewed and the ideas behind polyphase filter decomposition are introduced. Other concepts such as analytic signal representation and cross-correlation are briefly touched on.

The requirements of the correlator are reviewed, and each stage of the correlator and its operations are presented. The sequential mathematical transformations are shown, from the real time voltage signal input through to the complex visibilities output. These mathematical operations were simulated in Python and the results presented. The Python simulations not only aid in understanding the correlator's operations; they were also developed to test the mathematical operations that needed to be performed. These simulations were all run on a finite set of ideal input data from only one source; there were no invalid inputs, and the operations were not distributed over different computational devices. The problems created by the less ideal input that the correlator receives in reality, and the actions taken to produce the desired output, are described.
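In the spirit of those Python simulations, here is a minimal FX sketch: the F stage channelises each antenna's voltage stream with an FFT, and the X stage conjugate-multiplies antenna pairs and accumulates. The function name, shapes and channel count are illustrative assumptions, not the KAT API.

    import numpy as np

    def fx_correlate(voltages, nchan=64):
        """voltages: (n_ant, n_samples) real array -> (n_ant, n_ant, nchan)
        complex visibilities, averaged over all spectra."""
        n_ant, n_samp = voltages.shape
        nspec = n_samp // (2 * nchan)
        blocks = voltages[:, :nspec * 2 * nchan].reshape(n_ant, nspec, 2 * nchan)
        spectra = np.fft.rfft(blocks, axis=2)[:, :, :nchan]               # F stage
        return np.einsum("ask,bsk->abk", spectra, spectra.conj()) / nspec  # X stage

    rng = np.random.default_rng(1)
    vis = fx_correlate(rng.standard_normal((2, 8192)))
    print(vis.shape)    # (2, 2, 64); vis[0, 1] is a cross-correlation spectrum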

The design of the KAT API provided for the software correlator, and the modifications made to it to fully support the less than ideal input, are discussed. The channel abstraction from the KAT data frame is presented and its purpose described.

Various tests performed on sets of input data are shown. The results of this project are discussed and conclusions are drawn.

Back to Top
Back to Group Theses

 

 

Return to RRSG's Homepage

This page was last updated in September 2008 (RL)