AVP-Data Integration Engineering
Location: ID
Level: Managerial
Employment Status: Permanent
Department: Group Digital Engineering and Transformation
Description:

Role Purpose
Role & Responsibilities: Data Warehouse and Business Intelligence Engineering
To LEAD, OVERSEE, and GUIDE Data Integration, ETL, and Data Pipeline Engineering activities for end-to-end business solutions, ensuring high-performance, scalable, and reliable data movement across on-premise, cloud, and hybrid architectures using batch, API, streaming, or microservices. This role is critical in automating, optimizing, and modernizing data integration workflows while ensuring data quality, governance, and observability.

Strategic Leadership & Governance
- Enterprise Data Integration Strategy: Drive end-to-end data pipeline architecture across batch, real-time streaming, API-based, and cloud-native integrations.
- Multi-Cloud & Hybrid Data Architecture: Design scalable, flexible, and fault-tolerant data integration strategies spanning on-prem, Hadoop, and GCP (BigQuery, Cloud Storage, Pub/Sub, Dataflow, Dataproc).
- Vendor & Stakeholder Management: Collaborate with Data Engineers, BI Developers, Cloud Engineers, and Vendor Partners to ensure SLA compliance and optimal data flow management.
- Hadoop Ecosystem Mastery: Deep expertise in HDFS, Hive, Spark, Impala, HBase, Kafka, Oozie, and Sqoop.
- Optimized Data Processing: Implement distributed computing models for massive-scale ETL & analytics workloads.
- Data Lake & Data Lakehouse Optimization: Architect data ingestion pipelines for structured, semi-structured, and unstructured data into Delta Lake, Iceberg, or BigQuery.
- Microservices & API Integration: Develop high-performance API-based ETL solutions using REST, gRPC, GraphQL, and WebSockets for real-time data exchange.
- HBase & NoSQL API Integration: Enable low-latency API access to HBase, Cassandra, and DynamoDB for high-throughput operational analytics.
- Data Federation & Virtualization: Implement Federated Queries and Data Virtualization for seamless cross-platform data access.
- Enterprise Streaming Pipelines: Design & optimize Kafka, Flink, Spark Streaming, and Pub/Sub for real-time data ingestion and transformation.
- Event-Driven ETL Pipelines: Enable Change Data Capture (CDC) and event-based data processing for real-time decision-making.
- Kafka Integration: Develop high-throughput, scalable Kafka pipelines with Kafka Connect, Schema Registry, and KSQL.
- HBase Streaming: Leverage HBase + Kafka for low-latency, high-volume event ingestion & querying.
- BigQuery Optimization: Leverage partitioning, clustering, and materialized views for cost-effective and high-speed queries (a minimal table-definition sketch follows this list).
- ETL & Orchestration: Develop robust ETL/ELT pipelines using Cloud Data Fusion, Apache Beam, Dataflow, and Airflow (a minimal DAG sketch also follows this list).
- Hybrid Cloud & On-Prem Integration: Seamlessly integrate Hadoop-based Big Data systems with GCP, on-premises databases, and legacy BI tools.
- BI DevOps & Continuous Delivery: Implement CI/CD pipelines to accelerate BI feature releases, ETL deployments, and dashboard updates.
- Data Observability & Quality Monitoring: Ensure end-to-end monitoring of data pipelines, anomaly detection, and real-time alerting.
- AI/ML Integration for BI: Apply predictive analytics and AI-driven insights to enhance business intelligence and reporting.
- Bottleneck Identification & Resolution: Proactively identify and eliminate performance issues in Hadoop clusters, ETL pipelines, and BI reporting layers.
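By way of illustration of the BigQuery Optimization point above, here is a minimal sketch of defining a day-partitioned, clustered table with the google-cloud-bigquery Python client. The project, dataset, table, and column names (my-analytics-project, dwh.cdr_events, event_ts, msisdn) are hypothetical placeholders, not taken from the posting.

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project ID

# Hypothetical CDR event table: partitioned by day on the event timestamp and
# clustered on the subscriber key, so queries scan only the data they need.
schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("msisdn", "STRING"),
    bigquery.SchemaField("event_type", "STRING"),
    bigquery.SchemaField("charged_amount", "NUMERIC"),
]
table = bigquery.Table("my-analytics-project.dwh.cdr_events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_ts"
)
table.clustering_fields = ["msisdn", "event_type"]

client.create_table(table, exists_ok=True)
```

Partition pruning on event_ts plus clustering on msisdn keeps dashboard queries to a fraction of the table's bytes, which is the cost lever the bullet above refers to.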
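Similarly, for the ETL & Orchestration point, a minimal Airflow DAG sketch is shown below, assuming Airflow 2.x. The DAG id, task names, and the extract/load callables are hypothetical stand-ins for real pipeline steps.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_cdr(**context):
    """Hypothetical extract step: pull the day's CDR batch from a landing zone."""


def load_to_warehouse(**context):
    """Hypothetical load step: write the transformed batch into the warehouse."""


with DAG(
    dag_id="cdr_daily_etl",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_cdr", python_callable=extract_cdr)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```

In practice the two callables would wrap the Dataflow, Beam, or Cloud Data Fusion jobs named above; the retry settings illustrate the reliability expectations the role describes.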
Minimum Requirements
Qualification:
Minimum University Degree (S1), preferably in Information Technology, Computer Science, Electrical Engineering, Telecommunication, or Mathematics/Statistics.
Experience:
At least 5 years of full-cycle experience in Data Integration, Microservices, and Data Warehousing. Experience in the telecommunication industry is preferred. Experience managing a team is an advantage.
Skills:
- Very good analytical thinking and problem solving for effective identification of business problems, understanding of stakeholder needs, and assessment and formulation of solutions.
- Very good communication:
  - Very good communication in Indonesian and English.
  - Very good skills in technical writing and reporting.
  - Very good presentation and persuasion skills.
- Very good collaboration skills with many stakeholders.
- Very good knowledge of Data Warehousing, Big Data, and BI architecture, technology, design, development, and operation.
- Good knowledge of the telecommunication business in general.
- Have experience and knowledge of processing CDRs from telco systems, e.g. Charging and Billing, GGSN, MSC, VLR, SMSC, etc.
- Have a minimum of 5 years' experience managing Data Integration project and developer teams.
- Have experience working with near-real-time data, huge data volumes, and unstructured data processing.
- Familiar and hands-on with most of the technology stack below:
- Programming: Python, Java, Scala, Go, shell scripting, SQL (PL/pgSQL, T-SQL, BigQuery SQL), or other relevant scripting.
- Data Pipeline Orchestration: Apache Airflow, Cloud Composer, NiFi, etc.
- Big Data & Streaming: Kafka, Flink, Spark Streaming, HBase, Hive, Impala, Presto (a minimal consumer sketch follows this list)
- Cloud Data Engineering: GCP (BigQuery, Dataflow, Dataproc, Pub/Sub, Cloud Storage).
- Monitoring & Observability: ELK Stack (Elasticsearch, Logstash, Kibana), Datadog, Prometheus, Grafana
- Microservices & API Integration: REST, gRPC, GraphQL, WebSockets, OpenTelemetry
- Data Governance & Quality: Great Expectations, dbt, Dataform, Monte Carlo
- BI DevOps & Automation: Terraform, Kubernetes, GitOps, Cloud Build
- Good knowledge in IT infrastructure in the areas of Server, Storage, Database, Backup System, Desktop, Local/Wide Area Network, Data Center and Disaster Recovery.
- Has good knowledge of Agile Development (Scrum), Business Process Framework (eTOM), Application Framework (TAM), and Information Framework (SID).
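As a companion to the Big Data & Streaming item above, the following is a minimal near-real-time consumer sketch using the kafka-python client (the posting names Kafka but not a specific client, so the library choice is an assumption); the topic, broker address, consumer group, and field names are hypothetical.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker for near-real-time CDR events.
consumer = KafkaConsumer(
    "cdr-events",
    bootstrap_servers=["broker-1:9092"],
    group_id="cdr-enrichment",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    record = message.value
    # Keep only the fields the downstream BI layer needs; in a real pipeline
    # this is where the write to HBase or BigQuery would happen.
    slim = {
        "msisdn": record.get("msisdn"),
        "event_type": record.get("event_type"),
        "charged_amount": record.get("charged_amount"),
    }
    print(slim)
```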
Information:
- Company: Indosat
- Position: AVP-Data Integration Engineering
- Location: Indonesia
- Country: ID
How to Submit an Application:
If you meet the qualifications and minimum requirements described for the AVP-Data Integration Engineering position at Indosat Indonesia above and are interested in this vacancy (posted 2025-03-12), prepare your application documents as soon as possible: a job application letter, CV or curriculum vitae, copies of your diploma and transcripts, and any other supporting documents described above, then register and take part in the selection process via the Next Page link below.
Attention - In the recruitment process, legitimate companies never charge fees to candidates. If a company asks for payment for interviews, tests, ticket reservations, etc., it is best to avoid it, as this is an indication of fraud. If you see something suspicious, please contact us: support@jobkos.com
Post Date : 2025-03-12 | Expired Date : 2025-04-11