Jalen (Jae-eun) Yang

Graphics SW Engineer

Email: yangtkboy@gmail.com

Phone: +1-437-214-1185

Blog: jaeeun.github.io

About Me

Objective: Seeking a position as a Software Development Engineer in an organization where I can apply my 12 years of experience in Graphics and Android development, and where I can build new skills that contribute to the accomplishment of organizational goals.


Languages: Korean (Native), English (Intermediate)


Skills:

  • Graphics: OpenGL rendering, avatar animations, Unreal Engine 4, Unity (partial experience: shaders, native libraries)
  • Android: Java, native libraries (C++), REST API, GraphQL, Skia animation UI, profiling (Adreno, Mali, Google Debugger); strong at debugging and optimizing applications and systems
  • ML (basic level): Python, TensorFlow, PyTorch
  • Other: FFmpeg, Git, SVN, Perforce, Docker, AWS, Jira, Confluence

Experience

Samsung Electronics Co., Ltd.

https://www.samsung.com

Staff SW Engineer

February 2011 - present

IT & Mobile Communications, Suwon
   Android Developer (5 years)
   Graphics Engineer (3 years)

R&D Center (Samsung Research), Seoul
   Simulations Developer (3 years)

Creative Challenge Lab, Seoul
   Project Leader (1 year)

Projects

Fingeraction Project

(Project Leader)

Feb.2022 - present

Participated in the in-house venture program as a project leader, organizing ideas and recruiting team members.
Started the project called ‘Fingeraction’, which recognizes finger gestures and automatically generates avatar animations.

Digital Human

(Simulation)

Feb.2021 - Jan.2022

Attended CES 2021 in Las Vegas as a developer on the ‘Home Interactive Avatar’ project, which connects a digital human with robots and home networks.
Researched and authored a patent on “Voice to Speech Animation” using machine learning and ARKit BlendShapes.

Robot Simulation

(Simulation)

May.2019 - Jan.2021

Developed a robot simulation, including physics, sensors, and motion, with ROS using Unreal Engine 4.

AREmoji App.

(Graphics)

Feb.2018 - Apr.2019

Developed and maintained an in-house OpenGL renderer for glTF models (model loading, PBS rendering).
Developed a body-tracking feature with rigid-body animations using BVH data.
Developed a Unity plugin for third-party game apps.

Samsung VR App.

(Graphics)

Feb.2017 - Jan.2018

Developed the Samsung VR app based on a RESTful API and GraphQL.
Developed 4K streaming (added a ‘Stitching’ solution as an FFmpeg filter running in Docker on AWS).
Developed a ‘Stabilization’ solution.

VR Rendering Solution

(Graphics)

May.2016 - Jan.2017

Developed foveated rendering for mobile devices.
- Ported SMI’s eye-tracking solution.
- Implemented a two-layer rendering scheme using the stencil buffer to focus detail on the gaze point.
- Added shaders (GLSL, HLSL) to blend the two layers.

Participated in the ‘Octahedron Projection’ project, an efficient VR rendering solution for Samsung VR.
- Implemented a shader (GLSL) to blend the faces of the octahedron.

OnCircle App.

(Android)

May.2015 - Apr.2016

Circle UI communicator for the Galaxy S6 (Edge model).
- Developed various message types (AGIF, emoticons, touch and drag) and animated UI.

Kids Mode in Galaxy

(Android)

May.2014 - Apr.2015

Managed the life cycle of native kids’ applications in the Galaxy Store.
Developed animated UI for Parental Control, Kids Camera, Kids Voice Recorder, etc.

ETC apps

(Android)

Feb.2011 - Apr.2013

Managed third-party apps; maintained releases for tablet products, the quick panel, and the video player.

Education

Korea Aerospace University

Bachelor’s degree

2003 - 2011

Bachelor’s degree in Computer Science

Patents

US Patent 11,244,422

Image processing device and image processing method thereof

US Patent 10,331,208

Image output method and electronic device for supporting same

US Patent 10,650,596

Electronic device and method of providing VR image based on polyhedron

US Patent 11,189,071

Electronic device for providing avatar animation and method related to same

(Pending)

Method for generating BlendShape animation with user-situation-adaptive facial expressions and lip-sync data inferred from voice