Portfolio

Indoor Navigation in a Shopping Mall

Scene Reconstruction, 3D Model, Pose Estimation

  • A certain company requested a map built from 360-degree images in order to enhance their indoor service.
  • No ground truth was supplied.
  • On receiving the images, I noticed blurred regions in the middle of each frame and inferred that they were taken with two connected 180-degree cameras mounted on a helmet.
  • The spacing between consecutive images was large, roughly 10 meters, and the images had hardly any overlap, since the person capturing them was probably walking in diagonal or curved lines.
The approach I took was the following: 
  • I searched the web for a map of the mall and found a coarse outline that the mall itself had published on its website.
  • I printed it and pinpointed the landmark stores that appeared in the start and end images.
  • I then processed the captured images by dividing each 360-degree frame into six separate 60-degree views.
  • I estimated the fisheye radial distortion and applied the rectifying transformation matrix to straighten the images (see the rectification sketch after this list).
  • I then fed the rectified images, together with the start and end landmark images, into the SfM framework COLMAP, which allows configuring the feature detection and extraction, matching, and geometric verification methods.
  • I configured it with a neural network and ORB for feature extraction and matching, produced a structure-from-motion point-cloud reconstruction, and then computed depth using Multi-View Stereo (MVS); a sketch of this pipeline also follows the list.
  • Finally, I created a relative map of the camera positions and was able to relocalize against test images I had prepared beforehand.
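Below is a minimal sketch of the rectification step, assuming OpenCV's fisheye camera model; the intrinsics K and distortion coefficients D are hypothetical placeholders, not the values actually estimated for the helmet cameras.

    import cv2
    import numpy as np

    # Hypothetical intrinsics; in the project these were estimated from the footage.
    w, h = 1920, 1080
    K = np.array([[600.0, 0.0, w / 2],
                  [0.0, 600.0, h / 2],
                  [0.0, 0.0, 1.0]])
    D = np.array([0.05, -0.01, 0.002, 0.0])  # k1..k4 fisheye coefficients

    img = cv2.imread("pano_sector.jpg")  # one 60-degree slice of a 360 frame

    # Build the rectifying maps once, then remap every slice with them.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR,
                          borderMode=cv2.BORDER_CONSTANT)
    cv2.imwrite("rectified.jpg", rectified)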
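For the COLMAP stage, a sketch of an equivalent pipeline through the pycolmap Python bindings; note it shows COLMAP's default extractor and exhaustive matcher rather than the NN/ORB configuration described above, and all paths are placeholders.

    from pathlib import Path
    import pycolmap

    # Placeholder paths; the project used the rectified 60-degree slices as input.
    database = "colmap.db"
    images = "rectified_images/"
    output = "sparse/"
    Path(output).mkdir(exist_ok=True)

    # Feature extraction and matching (defaults shown; the project swapped in
    # NN-based and ORB features via COLMAP's configuration instead of SIFT).
    pycolmap.extract_features(database_path=database, image_path=images)
    pycolmap.match_exhaustive(database_path=database)

    # Incremental SfM: returns one reconstruction per connected component,
    # each holding the camera poses and the sparse point cloud.
    maps = pycolmap.incremental_mapping(database_path=database,
                                        image_path=images,
                                        output_path=output)
    for idx, rec in maps.items():
        print(idx, rec.summary())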

Indoor Navigation in a Building

Mapping, 3D Model, Pose Estimation

  • A certain company requested an indoor navigation solution to enhance their service indoors.
  • The project was meant to run on a certain mobile device.
The approach I took was the following: 
  • Mapping Stage:
    • The mobile device sends the camera location (in the device's world-frame coordinates) and the relevant images.
    • These are stored on a server. Special coordinates (entrances/doors/offices/elevators) are annotated with metadata through manual insertion.
    • I created point clouds using PCL triangulation.
  • Localization Stage:
    • The user downloads the app from the device's store and, through it, downloads a saved building session.
    • The user picks a destination from a list, and TSP is used to compute the shortest route.
    • The user starts walking in the building and the app sends images to the server; a NN matches them against similar pictures captured along the route, and PnP with RANSAC estimates the best location, returned in the device's world-frame coordinates (see the PnP sketch after this list).
    • The process repeats until the destination is reached within a certain distance.
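A minimal sketch of the PnP + RANSAC localization step using OpenCV, assuming the 2D-3D correspondences were already recovered by the image-matching NN; the correspondence arrays and intrinsics below are placeholders.

    import cv2
    import numpy as np

    # Hypothetical correspondences: 3D points from the stored building map and
    # their matched 2D pixel locations in the query image.
    object_points = np.random.rand(30, 3).astype(np.float32)
    image_points = np.random.rand(30, 2).astype(np.float32) * 1000

    K = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)  # assume the images were already undistorted

    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points, image_points, K, dist,
        reprojectionError=4.0, iterationsCount=200)
    if ok:
        R, _ = cv2.Rodrigues(rvec)          # rotation matrix
        cam_pos = (-R.T @ tvec).ravel()     # camera center in world frame
        print("estimated position:", cam_pos, "inliers:", len(inliers))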

Race Track Localization

Scene Detection, Pose Estimation, Localization

  • A company requested finding a car's position on a race track, to enhance position estimation from images supplied with ground-truth positions.
  • The car's speed was high, so some of the images were badly motion-blurred.
The approach I took was the following: 
  • I found a coarse outline of the race track map online.
  • I used perspective transforms to create a top-down 2D view from the horizon lines, which I knew were parallel.
  • I used morphological erosion to sharpen the white lane lines.
  • Then I used different color spaces, masking (of apexes and trapezoids), and thresholding, together with techniques such as the Hough transform and HOG, to extract lane features (see the lane-extraction sketch after this list).
  • I filtered out false positives by comparing detections against the next/previous frames and discarding inconsistent ones.
  • I connected and fitted lines where there weren't enough points.
  • Finally, I could trace a path that resembled the map I had found.
  • I then localized the car on the track, which succeeded only some of the time.
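A condensed sketch of the lane-extraction chain with OpenCV; the warp corners, color thresholds, and Hough parameters are illustrative assumptions, not the tuned values from the project.

    import cv2
    import numpy as np

    frame = cv2.imread("track_frame.jpg")
    h, w = frame.shape[:2]

    # Bird's-eye warp: map a road trapezoid to a rectangle (placeholder corners).
    src = np.float32([[w*0.40, h*0.65], [w*0.60, h*0.65], [w*0.95, h], [w*0.05, h]])
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, dst)
    top_down = cv2.warpPerspective(frame, M, (w, h))

    # Isolate white lane paint in HLS, then erode to thin/sharpen the lines.
    hls = cv2.cvtColor(top_down, cv2.COLOR_BGR2HLS)
    mask = cv2.inRange(hls, (0, 200, 0), (255, 255, 255))
    mask = cv2.erode(mask, np.ones((3, 3), np.uint8), iterations=1)

    # Probabilistic Hough transform to extract line segments from the mask.
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi/180, threshold=50,
                            minLineLength=40, maxLineGap=20)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(top_down, (x1, y1), (x2, y2), (0, 0, 255), 2)
    cv2.imwrite("lanes.jpg", top_down)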

I took another approach with ORB-SLAM2, a framework for simultaneous localization and mapping (SLAM) that uses ORB features.

  • I used the same image preprocessing as before, but for extraction I switched to ORB-SLAM2 and configured it in C++ to save a map (a small sketch of ORB matching follows below).
  • I then used the saved map for relocalization, which again succeeded only some of the time.
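For reference, a small sketch of the ORB extraction and matching that ORB-SLAM2 builds on, using OpenCV; this only illustrates the feature type, it is not the ORB-SLAM2 code itself, and the image paths are placeholders.

    import cv2

    img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Hamming distance for binary ORB descriptors; cross-check for symmetry.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if matches:
        print(f"{len(matches)} matches; best distance {matches[0].distance}")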

Planting Commercials on Buildings

Scene Recognition, Object Recognition, Tracking, Deep Learning

  • A company requested localization with 20 cm accuracy in 6 DOF, to enable drivers to see commercials on specific buildings while driving city roads.
  • The problem was that only GPS was provided, and its accuracy was between 3 and 10 meters, with no heading data.

The approach I took was the following: 
  • I trained a recognition network to recognize the specific buildings (annotated in advance) and projected a homography from their annotated contours (polygons) onto a smaller inner bounding rectangle for ad placement (see the homography sketch after this list).
  • I trained a segmentation network with a multi-label classifier, based on another NN, to feed the recognition network for faster results.
  • When the car was within sight of a building, the network predicted high probabilities, and a bbox was solved using traditional PnP & RANSAC in the region with the highest probability.
  • The next part was selecting points inside the bbox and tracking them with Lucas-Kanade optical flow to readjust the bbox across consecutive frames (see the tracking sketch below).
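A sketch of the ad-placement projection, assuming OpenCV: a homography maps the ad image's corners onto the inner rectangle detected on the building. The quadrilateral coordinates and file names are placeholders.

    import cv2
    import numpy as np

    frame = cv2.imread("street_frame.jpg")
    ad = cv2.imread("commercial.png")
    ah, aw = ad.shape[:2]

    # Placeholder: four corners of the inner rectangle found on the building,
    # in frame pixel coordinates (top-left, top-right, bottom-right, bottom-left).
    quad = np.float32([[420, 180], [760, 200], [750, 430], [415, 400]])
    src = np.float32([[0, 0], [aw, 0], [aw, ah], [0, ah]])

    H = cv2.getPerspectiveTransform(src, quad)
    warped = cv2.warpPerspective(ad, H, (frame.shape[1], frame.shape[0]))

    # Composite: paste the warped ad over the building region only.
    mask = cv2.warpPerspective(np.full((ah, aw), 255, np.uint8), H,
                               (frame.shape[1], frame.shape[0]))
    frame[mask > 0] = warped[mask > 0]
    cv2.imwrite("with_ad.jpg", frame)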
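And a sketch of the Lucas-Kanade step that readjusts the bbox between frames, again with placeholder inputs and coordinates.

    import cv2
    import numpy as np

    prev = cv2.imread("frame_t.jpg", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_t1.jpg", cv2.IMREAD_GRAYSCALE)

    # Pick corners inside the solved bbox (placeholder coordinates).
    x, y, w, h = 420, 180, 340, 250
    roi_mask = np.zeros_like(prev)
    roi_mask[y:y+h, x:x+w] = 255
    p0 = cv2.goodFeaturesToTrack(prev, maxCorners=100, qualityLevel=0.01,
                                 minDistance=7, mask=roi_mask)

    # Pyramidal Lucas-Kanade: track the corners into the next frame.
    p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                               winSize=(21, 21), maxLevel=3)
    good_new = p1[status.ravel() == 1]
    good_old = p0[status.ravel() == 1]

    # Shift the bbox by the median motion of the successfully tracked points.
    dx, dy = np.median(good_new - good_old, axis=0).ravel()
    x, y = int(x + dx), int(y + dy)
    print("readjusted bbox:", (x, y, w, h))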

Navigation & Motion Estimation Outdoors

Detection, Deep Learning

  • A company requested navigation through city roads using a VPS (Visual Positioning System).
The approach I took was the following: 
  • I applied a scene-recognition NN which took in a query picture and output its feature representation.
  • I compared the prediction to previously predicted pictures stored in the database in two steps (see the retrieval sketch after this list):
    • First filtering by a coarse geosearch radius (using GPS coordinates).
    • Then computing KNN between the feature vectors using similarity measures.
  • I used accelerometer and gyroscope data to refine (fuse with) the motion-estimation calculations and discard false positives.
  • I used a map to visualize the driver's motion path in real time according to the closest images that weren't false positives.
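A sketch of the two-step retrieval with placeholder data: a coarse GPS-radius filter followed by KNN over the feature vectors. Cosine similarity is my assumption for the unspecified similarity measure.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    # Placeholder database: stored feature vectors with their GPS coordinates.
    db_feats = np.random.rand(5000, 256).astype(np.float32)
    db_gps = np.random.rand(5000, 2) * 0.1 + [32.0, 34.8]  # lat, lon

    query_feat = np.random.rand(256).astype(np.float32)
    query_gps = np.array([32.05, 34.85])

    # Step 1: coarse geosearch, keep entries within a crude degree radius.
    radius_deg = 0.005
    near = np.linalg.norm(db_gps - query_gps, axis=1) < radius_deg
    candidates = db_feats[near]

    # Step 2: KNN over the surviving feature vectors (cosine similarity assumed).
    knn = NearestNeighbors(n_neighbors=5, metric="cosine").fit(candidates)
    dist, idx = knn.kneighbors(query_feat.reshape(1, -1))
    best = np.flatnonzero(near)[idx[0]]   # indices back into the full database
    print("closest stored images:", best, "distances:", dist[0])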

Algorithms:

  • Utilized algorithms such as optical flow, PCA, Lucas-Kanade, mean shift, Kalman filtering, SVM, KNN, k-means, naive Bayes, and many more.

I have worked on many more projects in various fields:

Web Full Stack Development
  • Programmed in C#, JS/TS, and Python to create full-stack web projects (server and client).
  • Clientele – projects designed for insurance companies, hospitals, government institutes, and banks in Israel, in accordance with the constraints each imposed.
  • Frameworks used: ASP.NET (WCF / Web API), Angular, React, Node.js

Operating Systems and Networking – Worked on all of the following platforms: Linux (Apache), macOS (Apache), and Windows (IIS / Active Directory)

ML / Deep Learning (PyTorch, TensorFlow, Keras)

Computer Vision – Recognition, Motion Analysis (Python, Pandas, NumPy, PIL, OpenCV, scikit-image, PCL, C++)

NLP – Morphological, Syntactic, Lexical (NLTK, TextBlob, spaCy, scikit-learn, LSTM, Shapely)

Project Management and Version Control – Jira, Git, Bitbucket, TFS, TargetProcess, Monday

Databases – Applied all of the following DBs: MS SQL, Oracle, MySQL, MongoDB, Postgres

Security – Configured Windows PowerShell, Windows Firewall

Cloud – Utilized Azure / AWS / Google Cloud

Visualizations (Tableau/Spotfire)

CI/CD (Jenkins, Ansible, Terraform)

Data Analysis and Engineering (EDA, Data Modeling, ETL Pipelines) 

Desktop – Created WPF/WinForms/Tkinter applications