Data Platform Engineer

New York, NY · Information Technology
Our client, a major bank in New York City, is looking for a Data Platform Engineer.
This is a permanent position with a competitive compensation package (base range $120-150K), excellent benefits, and a target bonus.


Hybrid role: must be in the New York City office 2-3 days per week.


We are looking for a highly skilled Kafka Platform Engineer to design, build, and operate our enterprise event-streaming platform using Red Hat AMQ Streams (Kafka on OpenShift). In this role, you will be responsible for ensuring a reliable, scalable, secure, and developer-friendly streaming ecosystem.
You will work closely with application teams to define and implement event-driven integration patterns, and you will leverage GitLab and Argo CD to automate platform delivery and configuration.

This position requires a strong blend of platform engineering, DevOps practices, Kafka cluster expertise, and architectural understanding of integration/streaming patterns.

Qualifications:
  • Bachelor’s degree in Computer Science, Engineering, or a related field.
  • Proven experience with Kafka administration and management.
  • Strong knowledge of OpenShift and container orchestration.
  • Proficiency in scripting languages such as Python or Bash.
  • Experience with monitoring and logging tools (e.g., Splunk, Prometheus, Grafana).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.
 
Preferred Qualifications:
  • Experience with Red Hat OpenShift administration.
  • Knowledge of service mesh patterns (Istio, OpenShift Service Mesh).
  • Familiarity with stream processing frameworks (Kafka Streams, ksqlDB, Flink).
  • Experience using observability stacks (Prometheus, Grafana).
  • Background working in regulated or enterprise-scale environments.
  • Knowledge of DevOps practices and tools (e.g., ArgoCD, Ansible, Terraform).
  • Knowledge of SRE practices and monitoring/logging tools (e.g., Splunk, Prometheus, Grafana).

Job Description:

Kafka & AMQ Streams Engineering
• Design, deploy, and operate AMQ Streams (Kafka) clusters on Red Hat OpenShift.
• Configure and manage Kafka components, including brokers, KRaft controllers, and MirrorMaker 2.
• Explore Kafka Connect and Schema Registry concepts and implementations.
• Ensure performance, reliability, scalability, and high availability of the Kafka platform.
• Implement cluster monitoring, logging, and alerting using enterprise observability tools.
• Manage capacity planning, partition strategies, retention policies, and performance tuning (see the sketch after this list).
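
To make the last bullet concrete, here is a minimal sketch of creating a topic with explicit partition and retention settings through Kafka's admin API. It assumes the kafka-python library; the bootstrap address, topic name, and values are hypothetical, and on AMQ Streams topics would more typically be declared as KafkaTopic custom resources (see the GitOps section below).

    # Minimal sketch (kafka-python assumed): create a topic with explicit
    # partitioning and retention. All names and values are hypothetical.
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="my-cluster-kafka-bootstrap:9092")

    # Partition count drives consumer parallelism; replication factor and
    # retention.ms drive durability and storage footprint.
    topic = NewTopic(
        name="payments.events",
        num_partitions=12,
        replication_factor=3,
        topic_configs={"retention.ms": str(7 * 24 * 60 * 60 * 1000)},  # 7 days
    )
    admin.create_topics([topic])
    admin.close()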
 
Integration Patterns & Architecture
• Define and document standardized event-driven integration patterns, including:
  ◦ Event sourcing
  ◦ CQRS
  ◦ Pub/sub messaging
  ◦ Change data capture
  ◦ Stream processing & enrichment
  ◦ Request-reply over Kafka (sketched after this list)
• Guide application teams on using appropriate patterns that align with enterprise architecture.
• Establish best practices for schema design, topic governance, data contracts, and message lifecycle management.
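
As a concrete illustration of one of the patterns above, here is a minimal request-reply-over-Kafka sketch that carries a correlation id and a reply-to topic in message headers. It assumes the kafka-python library; the bootstrap address, topic names, and the blocking reply loop are hypothetical simplifications.

    # Minimal request-reply sketch over Kafka (kafka-python assumed).
    # Topic names and the bootstrap address are hypothetical.
    import uuid
    from kafka import KafkaConsumer, KafkaProducer

    BOOTSTRAP = "my-cluster-kafka-bootstrap:9092"

    def send_request(payload: bytes) -> str:
        """Publish a request tagged with a correlation id and a reply-to topic."""
        corr_id = str(uuid.uuid4())
        producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
        producer.send(
            "orders.requests",
            value=payload,
            headers=[("correlation-id", corr_id.encode()),
                     ("reply-to", b"orders.replies")],
        )
        producer.flush()
        return corr_id

    def await_reply(corr_id: str) -> bytes:
        """Block until a reply with our correlation id arrives (simplified: no timeout)."""
        consumer = KafkaConsumer("orders.replies", bootstrap_servers=BOOTSTRAP)
        for msg in consumer:
            headers = dict(msg.headers or [])
            if headers.get("correlation-id", b"").decode() == corr_id:
                return msg.value

In practice, a responder service consumes orders.requests, processes each message, and produces the result to the topic named in the reply-to header, echoing the correlation id.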
 
Security & Governance
• Implement enterprise-grade security for Kafka, including RBAC, TLS, ACLs, and authentication/authorization integration (SSO and OAuth); a client-side configuration sketch follows this list.
• Maintain governance for topic creation, schema evolution, retention policies, and naming standards.
• Ensure adherence to compliance, auditing, and data protection requirements (encryption at rest and in flight).
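
For the client side of the TLS and ACL items above, the sketch below configures a mutually-authenticated producer; with TLS client authentication, the certificate's principal is what broker-side ACLs authorize. It assumes the kafka-python library; certificate paths and the bootstrap address are hypothetical.

    # Minimal mutual-TLS producer sketch (kafka-python assumed).
    # Certificate paths and the bootstrap address are hypothetical.
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="my-cluster-kafka-bootstrap:9093",
        security_protocol="SSL",
        ssl_cafile="/etc/kafka/certs/cluster-ca.crt",  # trust the cluster CA
        ssl_certfile="/etc/kafka/certs/user.crt",      # client cert = ACL principal
        ssl_keyfile="/etc/kafka/certs/user.key",       # client private key
    )
    producer.send("payments.events", b'{"amount": 100}')
    producer.flush()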
 
Collaboration & Support
• Provide platform guidance and troubleshooting expertise to development and integration teams.
• Partner with architects, SREs, and developers to drive adoption of event-driven architectures.
• Create documentation, runbooks, and internal knowledge-sharing materials.
 
CI/CD & GitOps Automation
• Build and maintain GitOps workflows using Argo CD for declarative deployment of Kafka resources and platform configurations.
• Develop CI/CD pipelines in GitLab, enabling automated builds, infrastructure updates, and configuration promotion across environments.
• Maintain Infrastructure-as-Code (IaC) repositories and templates for Kafka resources (a minimal templating sketch follows).
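
As a minimal sketch of the IaC templating in the last bullet, the snippet below renders a Strimzi/AMQ Streams KafkaTopic manifest that a GitLab pipeline could commit to a repository synced by Argo CD. It assumes PyYAML; the cluster label, topic name, and defaults are hypothetical.

    # Minimal IaC templating sketch (PyYAML assumed): render a KafkaTopic
    # custom resource for a GitOps repo. Names and defaults are hypothetical.
    import yaml

    def kafka_topic_manifest(name: str, partitions: int = 12, replicas: int = 3,
                             retention_ms: int = 604_800_000) -> str:
        """Return a KafkaTopic custom resource as a YAML string."""
        resource = {
            "apiVersion": "kafka.strimzi.io/v1beta2",
            "kind": "KafkaTopic",
            "metadata": {
                "name": name,
                "labels": {"strimzi.io/cluster": "my-cluster"},
            },
            "spec": {
                "partitions": partitions,
                "replicas": replicas,
                "config": {"retention.ms": retention_ms},
            },
        }
        return yaml.safe_dump(resource, sort_keys=False)

    print(kafka_topic_manifest("payments.events"))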



Please email your resume or use this link to apply directly:
https://brainsworkgroup.catsone.com/careers/index.php?m=portal&a=details&jobOrderID=16757380
Or email: igork@brainsworkgroup.com
Check all our jobs: http://brainsworkgroup.catsone.com/careers

Keywords: kafka openshift python bash splunk grafana Prometheus monitoring logging Redhat ksqlDB Flink DevOps ansible terraform ArgoCD
