Research Projects

Below is a list of the research projects I have worked on. They are mostly in the areas of machine learning, deep learning, computer vision, and evolutionary computation, but they have one thing in common: they are all about creating intelligent systems!

Convolutional Stacking Network

This project is an effort to experiment with deep networks that are not neural networks. The convolutional stacking network functions similarly to a convolutional neural network (CNN), with the difference that the layers are not necessarily made of neurons and are not trained through repeated holistic backward propagation of errors (backpropagation). Instead, feature extractors are stacked and trained layer by layer, from the input toward the later layers. The convolution operation is similar to that of conventional CNNs, except that other types of feature extractors, such as independent component analysis (ICA), are used as the filters. As is common, the convolutional network is followed by a classifier or a regressor.
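As a rough illustration, one layer of such a stack can be trained by fitting ICA to random image patches and using the learned components as convolution kernels. This is a minimal NumPy/scikit-learn sketch; the patch size, filter count, and ReLU-style nonlinearity are illustrative choices, not the project's exact configuration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def learn_ica_filters(images, patch, n_filters, rng, n_patches=2000):
    """Learn one layer's convolution kernels by fitting ICA to random patches."""
    H, W = images.shape[1:3]
    patches = []
    for _ in range(n_patches):
        i = rng.integers(len(images))
        y = rng.integers(H - patch + 1)
        x = rng.integers(W - patch + 1)
        patches.append(images[i, y:y+patch, x:x+patch].ravel())
    ica = FastICA(n_components=n_filters, random_state=0)
    ica.fit(np.array(patches))
    return ica.components_.reshape(n_filters, patch, patch)

def convolve_layer(images, filters):
    """Valid-mode convolution of each image with each learned filter."""
    n, H, W = images.shape
    f, p, _ = filters.shape
    out = np.empty((n, f, H - p + 1, W - p + 1))
    for k in range(f):
        for y in range(H - p + 1):
            for x in range(W - p + 1):
                out[:, k, y, x] = np.einsum('nij,ij->n',
                                            images[:, y:y+p, x:x+p], filters[k])
    return np.maximum(out, 0)  # nonlinearity between stacked layers
```

Training proceeds greedily: the first layer's filters are fit on raw patches, its outputs become the input for fitting the next layer's extractor, and so on, with no backward pass.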

NashNet

NashNet is a supervised deep neural network trained on normal-form stage games and their Nash equilibria to predict the equilibria of new games. The network is trained for a specific game shape; the shape, however, can be symmetric or asymmetric, and the games can have any number of players. The number of generated equilibria can be one or any desired number, set at training time: the loss function follows a max-min strategy and the architecture has multiple heads. Because a game can have fewer Nash equilibria than the number of generated predictions, the possibly redundant predictions of the neural network are combined by clustering them with the DBSCAN method.
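The clustering step can be sketched as follows, assuming each head outputs a flattened strategy profile. The `eps` value and the toy inputs are illustrative, not the trained model's.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def combine_predictions(pred_heads, eps=0.1):
    """Merge possibly redundant equilibrium predictions from the network's heads.

    pred_heads: (n_heads, dim) array, each row a flattened strategy profile.
    Returns one representative (the cluster mean) per DBSCAN cluster.
    """
    labels = DBSCAN(eps=eps, min_samples=1).fit_predict(pred_heads)
    return np.array([pred_heads[labels == c].mean(axis=0)
                     for c in sorted(set(labels))])
```

With `min_samples=1` every prediction joins some cluster, so near-duplicate heads collapse to one representative while genuinely distinct equilibria survive.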

Active Robotic Vision

In this project, an active vision system is used on a PR2 humanoid robot to better detect objects. Object detection can face issues in real-world tasks if the object is partially occluded or the viewpoint is not good enough for the vision system. This project aims to resolve such situations by dynamically identifying them and incorporating a second camera to improve detection performance.

The robotic vision system works on the PR2's ROS robotic platform. It uses two cameras: a Kinect 3D camera installed on the robot's head and an RGB camera in the robot's forearm. It first tries to detect objects viewed by the head camera and measures the confidence of the detections. If there is any uncertain detection, the vision system plans a movement of the forearm, and with it the secondary camera, to get another viewpoint of the uncertainly detected object. After moving the secondary RGB camera close to the object, another round of object detection is done, this time from the viewpoint of the secondary camera. Finally, the detections from the two camera views are matched and fused.
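The control flow of this loop can be sketched with stub detectors standing in for the real ROS components; the threshold value and the stub names are assumptions for illustration only.

```python
CONF_THRESHOLD = 0.6  # assumed value; the real threshold is tuned on the robot

def detect_head(scene):
    """Stub for the head (Kinect) detector: returns (label, confidence) pairs."""
    return scene["head_detections"]

def detect_forearm(scene, target):
    """Stub for the forearm RGB detector after the arm has been moved."""
    return scene["forearm_detections"][target]

def active_detection(scene):
    results = []
    for label, conf in detect_head(scene):
        if conf >= CONF_THRESHOLD:
            results.append((label, conf))
        else:
            # Uncertain: plan the arm, get a second viewpoint, and keep
            # whichever of the two detections is more confident.
            label2, conf2 = detect_forearm(scene, label)
            results.append((label2, conf2) if conf2 > conf else (label, conf))
    return results
```

In the real system the fusion step is the Dempster-Shafer technique described below rather than this simple max rule.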

Dempster-Shafer Decision Fusion

To fuse the detection probabilities of the two classifiers in the above system, a decision fusion technique based on Dempster-Shafer evidence theory is also developed. The technique uses each classifier's belief about its own uncertainty to weight its own and the other classifier's detections.
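The core operation such a technique builds on is Dempster's rule of combination; a minimal implementation over mass functions, with illustrative hypothesis sets in the test:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to masses summing to 1.
    Masses on intersecting hypotheses are multiplied and accumulated;
    mass assigned to disjoint pairs is the conflict, normalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources are incompatible")
    k = 1.0 - conflict
    return {h: w / k for h, w in combined.items()}
```

The project's weighting of one classifier's evidence by the other's uncertainty would sit on top of this rule; that part is not reproduced here.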

Dedicated Camera Pose Planner for Eye-in-Hand Robotic Platforms

As a part of the active vision project, there is a need to plan the pose of the secondary camera and then move the camera to that pose. Since there is no special-purpose robotic joint planner for active vision that considers the limits of the camera's field of view and the robot's arm joints, we developed a new planner that, given the desired viewing orientation and distance to an object, plans the arm joint values.

Next Best View Planner for Active Robotic Vision

To decide where to view the object from next, a next-best-view pose planner is developed that considers both the surface geometry of the object and the classification uncertainty in different directions by dividing the detection bounding box into smaller tiles and comparing them. This approach enables instant decision-making for active object detection systems.
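The tile-comparison idea can be sketched as follows, assuming a per-region uncertainty map over the detection's bounding box; the 3x3 tiling is an illustrative choice.

```python
import numpy as np

def next_view_direction(uncertainty_map, n_tiles=3):
    """Pick the viewing direction toward the most uncertain part of a detection.

    uncertainty_map: 2D array over the detection's bounding box (e.g. one minus
    the per-region class confidence). The box is cut into n_tiles x n_tiles
    tiles; the (row, col) index of the tile with the highest mean uncertainty
    indicates where the next view should point.
    """
    H, W = uncertainty_map.shape
    tile_means = np.empty((n_tiles, n_tiles))
    for i in range(n_tiles):
        for j in range(n_tiles):
            tile = uncertainty_map[i*H//n_tiles:(i+1)*H//n_tiles,
                                   j*W//n_tiles:(j+1)*W//n_tiles]
            tile_means[i, j] = tile.mean()
    return tuple(int(v) for v in
                 np.unravel_index(tile_means.argmax(), tile_means.shape))
```

In the actual planner the chosen tile is combined with the object's surface geometry to produce a camera pose; only the tile-scoring half is shown here.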

Nature-Inspired and Evolutionary Image Enhancement

The goal of this project is to enhance the contrast of gray-scale images using nature-inspired and evolutionary methods: ant colony optimization, a genetic algorithm, and simulated annealing. Together they generate a global transfer function that converts input images to higher-contrast ones while trying to preserve the natural look of the images.

The method works by placing a number of artificial agents (artificial ants) in a search space to generate a transfer function that converts any image to a higher-contrast one. The ants start from the origin of the transfer function (bottom left) and move toward the end point. When an ant reaches the end point, a transfer function is created and its fitness is evaluated. Based on how good the transfer function is, pheromone is deposited on the path the ant has travelled. Pheromone on a point increases the chance that an ant passing nearby in the next iteration chooses to go over it.
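A toy version of the ant walk and pheromone update might look like this; the gray-level count, allowed step sizes, evaporation rate, and spread-based fitness are all simplifying assumptions, not the project's actual parameters.

```python
import numpy as np

L = 32          # gray levels (reduced from 256 for the sketch)
rng = np.random.default_rng(0)
pher = np.ones((L, 3))          # pheromone per level for step sizes {0, 1, 2}

def ant_walk():
    """One ant builds a monotone transfer function from (0,0) toward (L-1,L-1)."""
    y, f = 0, [0]
    for x in range(1, L):
        p = pher[x] / pher[x].sum()          # pheromone biases the step choice
        y = min(y + rng.choice(3, p=p), L - 1)
        f.append(y)
    return np.array(f)

def fitness(f, hist):
    """Assumed fitness: spread of the mapped gray levels that actually occur
    in the image (higher spread = more contrast)."""
    return float(np.std(f[hist > 0]))

hist = rng.integers(0, 10, L)   # toy image histogram
for _ in range(20):             # colony iterations
    paths = [ant_walk() for _ in range(5)]
    scores = [fitness(f, hist) for f in paths]
    best = paths[int(np.argmax(scores))]
    pher *= 0.9                 # evaporation
    for x in range(1, L):       # deposit on the best ant's steps
        step = min(best[x] - best[x - 1], 2)
        pher[x, step] += max(scores)
```

Because the steps are nonnegative, every generated transfer function is monotone, which keeps the enhanced image's gray-level ordering natural.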

Each artificial ant carries a genetic code during the process, and the population of ants evolves via a genetic algorithm. This changes the characteristics of the ants and their preferences while traversing the search space. After the best transfer functions are selected, simulated annealing fine-tunes them in an artificial annealing process.

Genetic Algorithm Processor (GAP)

The goal of this project is to reduce the computation time of the genetic algorithm by speeding up its operations via parallel and pipelined processing on dedicated hardware. To this end, a transistor-level digital CMOS implementation of the genetic algorithm in a 0.18 μm process, with more than 478,000 MOS transistors, is proposed.

The genetic algorithm processor (GAP) is general-purpose, i.e., it is not bound to a specific application. Utilizing speed-boosting techniques such as pipelining, coarse-grained parallel processing, parallel fitness computation, parallel selection of parents, a dual-population scheme, and support for pipelined fitness computation, the GAP significantly reduces processing time. Furthermore, its built-in support for discarding infeasible solutions makes it usable in constrained problems. A large search space is achievable by extending the chromosome bit-string length through connecting multiple 32-bit GAPs. In addition, the GAP supports parallel processing, in which the genetic algorithm's procedure can run on several connected processors simultaneously.

In the tests, the GAP achieves a speedup of 5391x over its software counterpart, written in C and run on a 2x2.5 GHz CPU. It is also at least 55 times faster than any other genetic algorithm processor in the literature. These speedups are obtained while the results remain nearly identical to those of a serial genetic algorithm in software.

Dual-Population Genetic Algorithm

The dual-population genetic algorithm is a variation of the steady-state genetic algorithm suitable for pipelined processing in hardware implementations. An opportunity in high-speed parallel processing of the genetic algorithm (GA) is the ability to pipeline its serial operations, such as selection, reproduction, fitness calculation, and replacement. In a steady-state GA, selection and replacement take place in every iteration; since in a pipelined mode both would need to access the population memory at the same time, parallelizing them is inherently difficult. The goal of the dual-population scheme is to make it possible to perform selection and replacement simultaneously and thereby increase computation speed. It does this by defining two populations, one used for selection and the other for replacement in a given iteration, with the two switching roles in the next iteration.
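The role-switching scheme can be sketched in software as follows; tournament selection, one-point crossover, and the one-max test problem are illustrative choices, and the hardware benefit (selection and replacement touching different memories in the same cycle) is only mimicked here by the role swap.

```python
import random

def dual_population_ga(fitness, n_bits=16, pop_size=20, iters=200, seed=0):
    """Toy dual-population steady-state GA: each iteration selects parents
    from one population and replaces the worst member of the other; the two
    populations swap roles every iteration."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.randint(0, 1) for _ in range(n_bits)]
    pops = [[rand_ind() for _ in range(pop_size)] for _ in range(2)]
    sel, rep = 0, 1
    for _ in range(iters):
        def pick():  # binary tournament on the selection population
            a, b = rng.sample(pops[sel], 2)
            return a if fitness(a) >= fitness(b) else b
        p1, p2 = pick(), pick()
        cut = rng.randrange(1, n_bits)              # one-point crossover
        child = p1[:cut] + p2[cut:]
        child[rng.randrange(n_bits)] ^= 1           # bit-flip mutation
        worst = min(range(pop_size), key=lambda j: fitness(pops[rep][j]))
        pops[rep][worst] = child                    # replace in the OTHER pop
        sel, rep = rep, sel                         # populations switch roles
    return max(pops[0] + pops[1], key=fitness)
```

Because replacement only ever evicts the current worst individual, the best solution found is never lost while the two memories alternate duties.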

In a hardware setting, the dual-population GA speeds up the genetic operations by 34% on average compared to the conventional steady-state GA, while showing similar optimization characteristics.


Evolutionary Block-Based Neural Network

In this project, a block-based neural network is implemented and trained via a genetic algorithm. Besides the weights, the directions of the connections in the block-based neural network can also be trained.

To train the network, the genetic algorithm creates a population of network parameters (connection directions, weights, and biases) and improves the network over iterations. The crossover of two networks can take two forms: exchanging some of the corresponding connections of the two networks, or stochastically blending pairs of selected corresponding weights through a randomized weighted sum. The mutation operation can either change a connection weight randomly or flip the direction of a connection.
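The two crossover forms and the mutation operator can be sketched on a toy genome that maps each connection to a (weight, direction) pair; this representation is an assumption for illustration, not the project's encoding.

```python
import random

def crossover(g1, g2, rng, blend_prob=0.5):
    """Two crossover forms for block-based network genomes: exchange a
    corresponding connection wholesale, or blend the two weights with a
    random convex combination (directions are exchanged, never blended)."""
    child = {}
    for key in g1:
        w1, d1 = g1[key]
        w2, d2 = g2[key]
        if rng.random() < blend_prob:
            a = rng.random()                        # randomized weighted sum
            child[key] = (a * w1 + (1 - a) * w2,
                          d1 if rng.random() < 0.5 else d2)
        else:                                       # exchange whole connection
            child[key] = (w1, d1) if rng.random() < 0.5 else (w2, d2)
    return child

def mutate(g, rng, p=0.1):
    """Mutation either perturbs a connection weight or flips its direction."""
    out = {}
    for key, (w, d) in g.items():
        if rng.random() < p:
            if rng.random() < 0.5:
                w += rng.gauss(0, 0.5)
            else:
                d = 1 - d                           # flip direction bit
        out[key] = (w, d)
    return out
```

Note that blended weights always land between the two parents' weights, while exchanged connections copy one parent exactly, so the child stays in a sensible region of parameter space.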

Melanocyte Detection in H&E Images

Successfully detecting melanocyte cells in the skin epidermis is of great significance in skin histopathology. Because cells with an appearance similar to melanocytes exist in hematoxylin & eosin (H&E) images of the epidermis, detecting melanocytes is a challenging task. In our work, a novel threshold-based technique is applied to distinguish the candidate melanocyte nuclei. Similarly, the method is applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are then associated with their surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system, based on the HSI color information of a typical melanocyte in the epidermis, is proposed to calculate the similarity of each candidate cell to a melanocyte.
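The final fuzzy scoring step might be sketched with triangular membership functions over the HSI channels; the membership ranges below are placeholders for illustration, not the tuned values from the work.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def melanocyte_similarity(h, s, i):
    """Similarity of a candidate cell's mean HSI color to a typical melanocyte,
    combined Mamdani-style with min over the three channel memberships.
    All three (a, b, c) ranges are assumed placeholder values."""
    mu_h = tri(h, 0.50, 0.65, 0.80)   # assumed hue range
    mu_s = tri(s, 0.05, 0.20, 0.45)   # assumed saturation range
    mu_i = tri(i, 0.55, 0.75, 0.95)   # assumed intensity range (pale halo)
    return min(mu_h, mu_s, mu_i)
```

A candidate scoring near 1 matches the typical melanocyte color profile on all three channels; a score of 0 means at least one channel is far outside its range.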

Center of Gravity Defuzzifier

In this project, a discrete center-of-gravity defuzzifier is implemented as an analog circuit in a CMOS 0.35 μm process. Defuzzifiers are used as the last stage of a fuzzy system to translate the fuzzy logic data back into normal 'crisp' values. Instead of the traditional multiplier-divider style, transconductance amplifiers acting as voltage-input, current-output multipliers are used, exploiting the voltage-follower aggregation principle to increase speed and reduce chip area.
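The discrete center-of-gravity rule the circuit computes is the membership-weighted mean of the output points; in software it is a few lines:

```python
def cog_defuzzify(xs, mus):
    """Discrete center-of-gravity defuzzification:
    y = sum(mu_i * x_i) / sum(mu_i), over output points xs with
    membership values mus."""
    total = sum(mus)
    if total == 0:
        raise ValueError("All membership values are zero")
    return sum(m * x for x, m in zip(xs, mus)) / total
```

The analog implementation computes the same ratio, with the multiplications realized by transconductance amplifiers and the division implicit in the aggregation of their output currents.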

Neural Network as a Weighted Order Statistic Filter

A weighted order statistic filter based on a specific two-layer recurrent neural network is implemented as an analog circuit in a CMOS 0.35 μm process. Weighted order statistic filters select the k-th largest value of a statistical sample in which each sample point is repeated according to its weight. Maximum, minimum, and median are special cases of their operation. One of their main applications is in signal processing, specifically in noise removal.
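The filter's input-output behavior (ignoring the recurrent-network realization) can be sketched directly, assuming nonnegative integer weights:

```python
def wos_filter(samples, weights, k):
    """Weighted order statistic: repeat each sample according to its integer
    weight, sort, and return the k-th largest value. With unit weights,
    k = 1 gives the maximum, k = n the minimum, and k = (n + 1) // 2
    the median."""
    expanded = [s for s, w in zip(samples, weights) for _ in range(w)]
    expanded.sort(reverse=True)
    return expanded[k - 1]
```

The circuit computes the same selection without ever materializing the repeated list: the recurrent network settles on the sample whose weighted rank equals k.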
