(Description of general purpose of the job)
We are looking for a Database and Big Data Engineer who can find innovative and creative solutions to tough problems. As a Big Data Engineer, you’ll create and manage our data infrastructure and tools. You’ll evaluate the optimal design and architecture for different use cases of data streaming, storage, and processing, and drive their implementation. This role is also responsible for integrating the data platform with the various application frameworks and platforms in the organization’s Industry 4.0 digital solution ecosystem, and for supporting the exploration of new technologies and innovative solutions from a data communication, storage, and retrieval perspective.
Roles and Responsibilities (Essential roles, responsibilities and activities a candidate can expect to assume in this position)
• Work with the application development team, business unit process experts, and
outsourced technology partners to design data streaming, storage, retrieval and analysis
• Evaluate and implement Big Data tools and frameworks, software and hardware
required to provide relevant data platform capabilities
• Develop data integration solutions for IT and OT technologies developed by Innovation
• Evaluate and implement data ETL tools and establish development, test and
deployment process and governance
• Review solution performance, fine-tune it, and advise on necessary infrastructure
configuration updates or upgrades
• Ensure data management, access control, and usage comply with the organization’s
data governance policy.
• Design and implement data models and the integration development process.
• Support and provide technical advice for exploration of new technology, POC
development of new IT and OT solutions, data analytics, and AI/ML development.
Qualification and Education Requirements (Education and work experience that a candidate should have when applying for the position)
Minimum Education required (specific field or equivalent):
• Bachelor’s degree in a technical discipline, with an emphasis on Computer Science
• A passion for massive data, data protocols, data analytics
Minimum years of experience in role:
• Proficient understanding of distributed computing principles
• 2–5 years of experience with database platforms (e.g., relational databases, Hadoop clusters,
HBase, Cassandra, MongoDB, data streams, APIs)
• 2–5 years of experience building stream-processing systems using solutions such as
Storm or Spark Streaming
• Experience with Big Data querying tools, such as Pig, Hive, and Impala
• Experience with Spark
• Experience with messaging systems, such as Kafka or RabbitMQ
• Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
• Experience with cloud Big Data platforms
• Good understanding of Secure Software Development, secure code quality control, and
application and system integration vulnerability assessment.
• Good understanding of Application Development and Software Assurance
• Good understanding of smart factory analytics solutions such as the PTC smart factory
framework (ThingWorx, Kepware).
• Good understanding of Lambda Architecture, along with its advantages and drawbacks
• Strong individual performer as well as a contributor within a team
• Fresh graduates who are highly self-motivated are welcome to apply as well.
Preferred Skills (if any):
• Demonstrated excellent analytical ability, communication, and interpersonal
skills required to build relationships with team members, solve problems, and resolve issues
• Experience with Big Data development tools
• Familiar with Software Development
• Familiar with deep learning and computer vision domains (object
classification/detection/segmentation, video analytics, text detection/OCR, etc.)
• Familiar with deep learning frameworks (e.g., TensorFlow, PyTorch, Keras, etc.)
• Familiar with smart factory analytics solutions such as the PTC smart factory framework
JBRTX // Equal Opportunity Employer // 01527333