
Hello, I am Felipe Boseong Jeon
Thank you for visiting my website! I am a robotics engineer who loves building
robotic systems and exploring the latest advancements in the field of robotics.
As a dedicated researcher and developer with a PhD, I have covered a wide range of
areas, from hardware design and object recognition to online spatial mapping and
motion planning & control.
Education
- Received a BS in mechanical & aerospace engineering at Seoul National University (FYI, the top university in South Korea), 2013-2017
- Received a PhD in robotics at the Lab for Autonomous Robotics Research (LARR, advisor: H. Jin Kim), 2017-2022 (5-year graduation 🥷)
Publications

Online trajectory generation of a MAV for chasing a moving target in 3D dense environments

Integrated Motion Planner for Real-time Aerial Videography with a Drone in a Dense Environment

Autonomous Aerial Dual-Target Following Among Obstacles
What Can I Bring?
1. Real-time motion planning (major)
My PhD work centered on 3D real-time motion planning. I have covered various problems, including the boundary value problem (BVP), minimal-time goal reaching, and safe trajectory planning against dynamic obstacles. As the next section describes, I have solved these problems in very practical situations such as unstructured environments and noisy observations. In particular, I specialized in chasing (following) trajectory generation for dynamic targets; a minimal BVP sketch follows below.
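To make the BVP part concrete, here is a minimal, self-contained sketch (plain C++, no dependencies; an illustrative example rather than code from my projects) of the closed-form quintic boundary value problem behind many smooth-trajectory planners: connecting two full 1D states (position, velocity, acceleration) over a horizon T. For 3D, one instance runs per axis.

```cpp
#include <array>
#include <cstdio>

// Quintic polynomial x(t) = sum_i a[i] * t^i connecting two 1D states
// (position, velocity, acceleration) over a horizon T: a closed-form BVP.
struct QuinticBVP {
  std::array<double, 6> a{};

  QuinticBVP(double p0, double v0, double ac0,
             double pT, double vT, double acT, double T) {
    const double T2 = T * T, T3 = T2 * T, T4 = T3 * T, T5 = T4 * T;
    a[0] = p0;
    a[1] = v0;
    a[2] = 0.5 * ac0;
    a[3] = (20 * (pT - p0) - (8 * vT + 12 * v0) * T - (3 * ac0 - acT) * T2) / (2 * T3);
    a[4] = (30 * (p0 - pT) + (14 * vT + 16 * v0) * T + (3 * ac0 - 2 * acT) * T2) / (2 * T4);
    a[5] = (12 * (pT - p0) - 6 * (vT + v0) * T - (ac0 - acT) * T2) / (2 * T5);
  }

  double position(double t) const {
    double x = 0.0, tp = 1.0;
    for (double c : a) { x += c * tp; tp *= t; }
    return x;
  }
};

int main() {
  // Rest-to-rest motion from 0 m to 1 m in 2 s (a minimum-jerk profile).
  QuinticBVP traj(0, 0, 0, 1, 0, 0, 2.0);
  for (double t = 0.0; t <= 2.0; t += 0.5)
    std::printf("t=%.1f  x=%.3f\n", t, traj.position(t));
  return 0;
}
```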
Foundations
- Solid understanding of and hands-on experience with (non-holonomic) optimization-based motion planning methods such as LQR, CHOMP, and TEB (see the LQR sketch after this list).
- Hierarchical planning methods that join global path planners (search- or sampling-based) with local planners (optimal control or splines) to handle complex environments.
- Lattice-based motion planning using an offline motion library.
- Fast research skills to cater to a specific scenario (cost shaping and constraint formulation).
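To give a flavor of the optimization-based side, below is a minimal, self-contained LQR sketch (plain C++; the weights are illustrative): the infinite-horizon gain for a 1D double integrator, obtained by iterating the discrete Riccati recursion.

```cpp
#include <array>
#include <cstdio>

// Infinite-horizon discrete LQR for a 1D double integrator:
// state x = [position, velocity], single input u = acceleration.
// The gain is found by iterating the Riccati recursion to convergence.
using Mat2 = std::array<std::array<double, 2>, 2>;
using Vec2 = std::array<double, 2>;

int main() {
  const double dt = 0.1;
  const Mat2 A = {{{1.0, dt}, {0.0, 1.0}}};
  const Vec2 B = {0.5 * dt * dt, dt};
  const Mat2 Q = {{{1.0, 0.0}, {0.0, 0.1}}};  // state cost (illustrative)
  const double R = 0.01;                      // input cost (illustrative)

  Mat2 P = Q;
  Vec2 K = {0.0, 0.0};
  for (int it = 0; it < 1000; ++it) {
    // s = R + B' P B (scalar because of the single input)
    const double PB[2] = {P[0][0] * B[0] + P[0][1] * B[1],
                          P[1][0] * B[0] + P[1][1] * B[1]};
    const double s = R + B[0] * PB[0] + B[1] * PB[1];
    // K = s^-1 * B' P A
    for (int j = 0; j < 2; ++j)
      K[j] = (PB[0] * A[0][j] + PB[1] * A[1][j]) / s;
    // P <- Q + A' P A - (A' P B) K  (standard Riccati update)
    Mat2 Pn = Q;
    for (int i = 0; i < 2; ++i)
      for (int j = 0; j < 2; ++j) {
        double APA = 0.0;
        for (int k = 0; k < 2; ++k)
          APA += A[k][i] * (P[k][0] * A[0][j] + P[k][1] * A[1][j]);
        Pn[i][j] = Q[i][j] + APA - (PB[0] * A[0][i] + PB[1] * A[1][i]) * K[j];
      }
    P = Pn;
  }
  // Feedback law: u = -K x, i.e. u = -K[0]*pos_error - K[1]*vel_error.
  std::printf("LQR gain: K = [%.3f, %.3f]\n", K[0], K[1]);
  return 0;
}
```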
2. Perception for localization, mapping and recognition
To realize such motion on real-world platforms, I came to acquire the following domain knowledge:
Localization
In many cases, an indoor Vicon room was too small for my experiments, so I had to test and tune the following algorithms:
- ZEDfu (the ZED camera's in-house VIO algorithm)
- VINS-Mono (and VINS-Fusion)
- ORB-SLAM
- (LeGO-)LOAM
Also, I had to use Kalibr for extrinsic calibration. Although I did not develop the above algorithms myself, I have hands-on experience with them and a good sense of their operating conditions, as well as of parameter tuning in submodules such as keyframe generation, bundle adjustment, and binning & feature association of SLAM algorithms.
Volumetric mapping
To measure safety or visibility (or simply for visualization), I am proficient with mapping frameworks that build a (signed) distance field (ESDF) from point clouds. Sometimes I have to generate the point cloud myself from depth images using the pin-hole model (see the back-projection sketch after this list). I have a lot of experience with the frameworks below:
- In the simplest case, the costmap of the ROS navigation stack
- OctoMap and dynamicEDT3D (my fork)
- Voxblox
- 3D mesh generation & rendering with Open3D. I also have experience using CUDA in Open3D.
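As a concrete reference for the pin-hole step, here is a minimal back-projection sketch (plain C++; the intrinsics and depth image are illustrative placeholders, not from a real camera):

```cpp
#include <cstdio>
#include <vector>

// Back-project a depth image into a 3D point cloud with the pin-hole model:
//   X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
struct Point3 { float x, y, z; };

std::vector<Point3> depthToCloud(const std::vector<float>& depth,
                                 int width, int height,
                                 float fx, float fy, float cx, float cy) {
  std::vector<Point3> cloud;
  cloud.reserve(depth.size());
  for (int v = 0; v < height; ++v) {
    for (int u = 0; u < width; ++u) {
      const float z = depth[v * width + u];
      if (z <= 0.0f) continue;  // skip invalid depth readings
      cloud.push_back({(u - cx) * z / fx, (v - cy) * z / fy, z});
    }
  }
  return cloud;
}

int main() {
  // Tiny synthetic 2x2 depth image, mostly at 1 m, one invalid pixel.
  const std::vector<float> depth = {1.0f, 1.0f, 1.0f, 0.0f};
  auto cloud = depthToCloud(depth, 2, 2, 525.f, 525.f, 1.f, 1.f);
  std::printf("valid points: %zu\n", cloud.size());
  return 0;
}
```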
Object perception
When I implemented my chasing modules, I had to detect and track the target visually (in 2D images) and predict its motion in 3D (see the prediction sketch after this list).
- I collaborated with a friend to test learning-based detection modules.
- I have hands-on experience comparing visual trackers.
- Coding experience with 3D bounding box detection and skeleton estimation.
- Combining different inference outputs to track the 3D position of objects.
- YOLO and pixel-level segmentation.
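For the 3D motion prediction, a constant-velocity Kalman filter is the usual baseline I start from. Below is a minimal per-axis sketch (plain C++; the noise values are illustrative, not tuned for any particular sensor):

```cpp
#include <cstdio>

// Per-axis constant-velocity Kalman filter: predict a target's motion from
// noisy position detections. Run one instance per axis for a 3D target.
struct CvKalman1D {
  double p = 0, v = 0;                // state: position, velocity
  double P[2][2] = {{1, 0}, {0, 1}};  // state covariance
  double q = 0.5, r = 0.04;           // process / measurement noise (illustrative)

  void predict(double dt) {
    p += v * dt;
    const double P00 = P[0][0], P01 = P[0][1], P10 = P[1][0], P11 = P[1][1];
    P[0][0] = P00 + dt * (P10 + P01) + dt * dt * P11 + q * dt;
    P[0][1] = P01 + dt * P11;
    P[1][0] = P10 + dt * P11;
    P[1][1] = P11 + q * dt;
  }

  void update(double z) {
    const double S = P[0][0] + r;                     // innovation covariance
    const double K0 = P[0][0] / S, K1 = P[1][0] / S;  // Kalman gain
    const double y = z - p;                           // innovation
    p += K0 * y;
    v += K1 * y;
    for (int j = 0; j < 2; ++j) {
      const double P0j = P[0][j];
      P[0][j] -= K0 * P0j;
      P[1][j] -= K1 * P0j;
    }
  }
};

int main() {
  CvKalman1D x;  // one axis shown
  const double detections[] = {0.10, 0.21, 0.29, 0.42, 0.50};
  for (double z : detections) {
    x.predict(0.1);  // 10 Hz detections
    x.update(z);
  }
  std::printf("est. position %.2f, velocity %.2f\n", x.p, x.v);
  return 0;
}
```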
3. System (software & hardware) integration
While implementing the algorithms (planning, perception) on real-world robots, I spent a lot of time building hardware (drones and mobile robots) and writing manageable source code for release and maintenance.
Hardware integration
- I designed whole hardware configurations, including networks, sensors, actuation, and onboard computers, considering many factors such as required battery life, payload, and even budget.
- I have a lot of experience with sensors and their drivers. For example, I have used vision sensors ranging from mono, stereo, and depth cameras to LiDAR in my experiments. I am also familiar with other types of sensors such as IMUs, barometers, and 1D distance sensors (normally with Pixhawk). Of course, I am comfortable with filters such as the KF, EKF, and UKF.
- I can design whole mechanical parts using CAD programs such as SolidWorks. I have a good sense of geometric tolerances for production. Also, I enjoy scheduling multiple production threads.
- Basic skills in wiring and soldering.
Software integration
On the other hand, I have researched best practices for managing a large set of source code. For example, I had to manage threads & concurrency to combine various sensor outputs (in many cases, from ROS spinners or executors). I am quite particular about the following:
- Clean code management, including design patterns (the strategy pattern for algorithm comparison, the factory pattern for extensibility; see the sketch after this list)
- UI implementations using React (web) and Qt
- CMake project packaging with strict observance of the adapter pattern (e.g., core and ROS separation)
- Project management tools such as git, CI (including GitHub Actions), and JIRA for TDD
- Needless to say, an expert level of ROS and ROS2
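As an illustration of the strategy pattern mentioned above, here is a minimal sketch (plain C++; the planner classes are hypothetical) of how algorithms can be swapped and compared behind a single interface:

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Strategy pattern for swapping planners at runtime: the structure that lets
// algorithms be benchmarked behind one interface without touching callers.
struct State { double x, y; };

class Planner {  // the strategy interface
 public:
  virtual ~Planner() = default;
  virtual std::vector<State> plan(const State& start, const State& goal) = 0;
};

class StraightLinePlanner : public Planner {
 public:
  std::vector<State> plan(const State& s, const State& g) override {
    return {s, g};  // trivially connect start and goal
  }
};

class WaypointPlanner : public Planner {
 public:
  std::vector<State> plan(const State& s, const State& g) override {
    State mid{0.5 * (s.x + g.x), 0.5 * (s.y + g.y)};
    return {s, mid, g};  // stand-in for a real search-based planner
  }
};

int main() {
  // Callers depend only on Planner, so algorithms can be compared
  // (or selected from a config file) without changing client code.
  std::vector<std::unique_ptr<Planner>> planners;
  planners.emplace_back(std::make_unique<StraightLinePlanner>());
  planners.emplace_back(std::make_unique<WaypointPlanner>());
  for (const auto& p : planners) {
    const auto path = p->plan({0, 0}, {1, 1});
    std::printf("path with %zu waypoints\n", path.size());
  }
  return 0;
}
```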
4. Testing and experiment management
I hold testing skills in high regard, as one of the must-have qualifications.
Testing and simulation
- Proficiency in simulation frameworks such as Gazebo and AirSim (with the Unreal game engine).
- A strict practitioner of gtest. I am comfortable with designing good tests for TDD (a minimal sketch follows this list).
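For flavor, here is a minimal gtest sketch in the TDD style I follow: pin down the expected boundary behavior before implementing. The clampSpeed helper is hypothetical, shown only to illustrate the pattern.

```cpp
#include <gtest/gtest.h>

// A hypothetical helper whose boundary behavior the tests specify first.
double clampSpeed(double v, double vmax) {
  if (v > vmax) return vmax;
  if (v < -vmax) return -vmax;
  return v;
}

TEST(ClampSpeedTest, PassesThroughWhenWithinLimit) {
  EXPECT_DOUBLE_EQ(clampSpeed(1.0, 2.0), 1.0);
}

TEST(ClampSpeedTest, ClampsSymmetricallyAtTheLimit) {
  EXPECT_DOUBLE_EQ(clampSpeed(5.0, 2.0), 2.0);
  EXPECT_DOUBLE_EQ(clampSpeed(-5.0, 2.0), -2.0);
}

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```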
Field experiment
- I am really used to getting my hands dirty. At the same time, I always thoroughly design a sequence of hardware tests. For example, when building a real drone, I check the items below one by one:
  - Does all the wiring work?
  - Does the rotational direction of each motor match the airframe preset?
  - Do the speeds of the motors follow the control logic? (If the RC commands forward flight, the two rear motors should rotate faster; see the mixing sketch at the end of this section.)
  - etc.
- I plan how to record the experiment scene: for example, where to set up cameras to record videos, and which data to log (e.g., ROS bags or Flight Review).
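To make the motor-speed check concrete, here is a minimal quad-X mixing sketch (plain C++). The signs follow one common X-frame convention; real airframe presets (e.g., in PX4) define their own mixing tables, so treat this as illustrative only.

```cpp
#include <cstdio>

// Minimal quad-X motor mixer illustrating the checklist item above: a
// forward (nose-down) pitch command must speed up the rear motors and slow
// the front ones. Signs follow one common convention and vary by airframe.
struct MotorCmd { double frontRight, frontLeft, rearLeft, rearRight; };

MotorCmd mix(double thrust, double roll, double pitch, double yaw) {
  return {
      thrust - roll - pitch + yaw,  // front-right
      thrust + roll - pitch - yaw,  // front-left
      thrust + roll + pitch + yaw,  // rear-left
      thrust - roll + pitch - yaw,  // rear-right
  };
}

int main() {
  // Hover thrust plus a forward-pitch command.
  const MotorCmd m = mix(0.5, 0.0, 0.1, 0.0);
  std::printf("front: %.2f %.2f | rear: %.2f %.2f\n",
              m.frontRight, m.frontLeft, m.rearLeft, m.rearRight);
  // Expect the rear motors (0.60) to run faster than the front ones (0.40).
  return 0;
}
```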