Data and AI Solutions Engineer at Reignova Technologies

Job Role Insights

  • Date posted: 2026-04-16
  • Closing date: 2026-05-16
  • Hiring location: Dar es Salaam
  • Career level: Mid-level
  • Qualification: Bachelor's Degree
  • Experience: 3 - 5 Years
  • Gender: Both
  • Job ID: 130833

Job Description

Company Overview

Reignova Technologies is an Information Technology consultancy headquartered in Dar es Salaam, Tanzania. We partner with commercial banks, telecom operators, government institutions, SMEs, and large enterprises to deliver transformative digital solutions spanning AI & automation, data analytics, cloud & DevOps, cybersecurity, and managed IT services. At Reignova, data is not just a resource; it is the foundation upon which every intelligent solution is built. We don't just build technology; we build futures.

Role Summary

As a Data & AI Solutions Engineer at Reignova, you turn raw enterprise data into intelligent systems that clients can act on immediately. You engineer the data pipelines that power AI, then build the AI solutions that extract value from that data. Whether you are architecting real-time data platforms, deploying Generative AI applications that automate workflows, or building RAG systems that turn enterprise knowledge into competitive advantage, your work produces client solutions that are visible, measurable, and transformative.

Key Responsibilities

Data Engineering & Pipeline Architecture

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows that ingest, transform, and deliver high-quality structured and unstructured data from diverse enterprise sources including databases, APIs, IoT streams, and third-party platforms
  • Architect enterprise data platforms using modern lakehouse and warehouse solutions (Azure Synapse Analytics, Databricks, Google BigQuery, AWS Redshift) to support both operational analytics and AI model training at scale
  • Implement real-time and batch data processing frameworks using Apache Spark, Kafka, Azure Data Factory, and equivalent tools, ensuring data is fresh, reliable, and available for both business intelligence and AI consumption
  • Design and enforce data governance, data quality, and master data management frameworks that ensure data assets meet the accuracy, completeness, and compliance requirements of regulated enterprise clients in banking, telecom, and government
  • Build and manage data catalogues, metadata management systems, and data lineage tracking to provide full visibility into how data flows from source to insight across client environments
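To give a flavour of the pipeline and data-quality work described above, here is a minimal extract-transform-load sketch in Python with pandas. All record fields and quality rules are hypothetical illustrations; production pipelines at this scale would use the Spark, Kafka, or Azure Data Factory tooling listed above.

```python
import pandas as pd

# Hypothetical raw records, standing in for an API or database extract.
RAW_RECORDS = [
    {"account_id": "A-001", "balance": "1500.50", "branch": "Dar es Salaam"},
    {"account_id": "A-002", "balance": "not-a-number", "branch": "Dodoma"},
    {"account_id": None, "balance": "320.00", "branch": "Mwanza"},
]

def extract() -> pd.DataFrame:
    """Extract: load raw source records into a DataFrame."""
    return pd.DataFrame(RAW_RECORDS)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: coerce types and enforce simple data-quality rules."""
    df = df.copy()
    # Unparseable balances become NaN instead of raising.
    df["balance"] = pd.to_numeric(df["balance"], errors="coerce")
    # Quality gate: drop rows missing a key or a valid balance.
    return df.dropna(subset=["account_id", "balance"]).reset_index(drop=True)

def load(df: pd.DataFrame) -> list[dict]:
    """Load: hand clean rows to the next stage (here, just a list)."""
    return df.to_dict(orient="records")

clean = load(transform(extract()))
```

The quality gate is the part regulated clients care about most: rows that fail validation are removed (or, in a real pipeline, quarantined for review) before they reach analytics or model training.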

AI, Generative AI & Automation Engineering

  • Design and build production-grade Generative AI applications using Azure OpenAI Service, OpenAI API, and Google Vertex AI that leverage structured and unstructured enterprise data to deliver intelligent, data-driven decision support for banking, telecom, and government clients
  • Architect Retrieval-Augmented Generation (RAG) systems that connect vector databases (Pinecone, Chroma, Azure AI Search) to curated enterprise data pipelines, enabling AI systems to provide accurate, context-aware responses grounded in real organizational data
  • Develop Python-based AI automation pipelines for data ingestion, preprocessing, LLM chaining, output parsing, and post-processing, ensuring end-to-end workflows are production-ready, maintainable, and observable across enterprise cloud infrastructure
  • Design and deploy intelligent automation workflows using n8n, Make (Integromat), and Azure Prompt Flow that integrate LLMs with enterprise data sources, CRMs, ERPs, and communication platforms to automate complex, data-intensive business processes
  • Build and deploy machine learning models for predictive analytics use cases including fraud detection, customer churn prediction, demand forecasting, and process optimization using data engineered specifically for model consumption
  • Engineer advanced prompt engineering frameworks, evaluation pipelines, and responsible AI guardrails for domain-specific LLM applications, ensuring AI outputs are accurate, auditable, and aligned with client compliance requirements
  • Stay at the leading edge of both data engineering and AI/LLM research, translating emerging capabilities (GPT-4o, Gemini, Llama, new vector DB architectures) into practical, data-grounded client solutions
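The RAG responsibilities above follow a retrieve-then-augment flow that can be sketched in a few lines of plain Python. A production system would call an embedding model (e.g. via Azure OpenAI) and a managed vector database such as Pinecone or Azure AI Search; here a toy bag-of-words vector stands in so the whole flow stays self-contained, and all document text is invented for illustration.

```python
import math
import re
from collections import Counter

# Hypothetical enterprise knowledge snippets; a real system would chunk
# documents produced by a governed data pipeline.
DOCS = [
    "Loan approvals require a verified national ID and credit history.",
    "Churned customers are flagged after 90 days of inactivity.",
    "All data exports must pass the compliance review board.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector. A real deployment
    would use an embedding model instead."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Retrieve the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment: ground the LLM prompt in the retrieved context."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("after how many days of inactivity is a customer churned")
```

The final prompt is what gets sent to the LLM: the model answers from retrieved organizational data rather than from its training distribution, which is what makes responses auditable and grounded.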

Client Delivery & Collaboration

  • Collaborate with solution architects, business analysts, and client stakeholders to define data and AI use cases, scope data readiness requirements, and deliver measurable ROI from data and AI investments
  • Translate complex data architectures and AI capabilities into clear business value narratives for C-suite and technical stakeholders within enterprise and government client organizations
  • Support pre-sales engagements by contributing data and AI expertise to solution design, technical proposals, and proof-of-concept demonstrations for prospective clients

Required Qualifications & Experience

  • At least 3 years of combined professional experience spanning both data engineering and AI/ML or Generative AI systems, with demonstrable production deployments in both disciplines
  • Strong data engineering proficiency: designing ETL/ELT pipelines, working with distributed data processing frameworks (Apache Spark, Kafka), and managing cloud data platforms (Azure Synapse, Databricks, BigQuery, or Redshift)
  • Proven hands-on experience with Azure OpenAI, OpenAI API, or Google Vertex AI — beyond experimentation, including production LLM deployments integrated with real enterprise data sources
  • Strong Python programming skills applied across both data engineering (pandas, PySpark, data transformation pipelines) and AI contexts (LLM API calls, prompt management, RAG pipelines, model inference)
  • Deep familiarity with LLM orchestration frameworks including LangChain, LlamaIndex, Semantic Kernel, or Azure Prompt Flow, with experience integrating these into data-rich enterprise environments
  • Solid understanding of vector databases, embedding models, and RAG architecture design, with experience connecting these systems to engineered enterprise data assets
  • At least 2 years of hands-on experience building automation workflows using n8n and/or Make (Integromat), including multi-step workflows with conditional logic, error handling, and API integrations
  • Experience with data governance principles, data quality frameworks, and compliance requirements applicable to regulated industries such as banking, insurance, and government
  • Ability to communicate complex data and AI solutions to non-technical enterprise stakeholders, translating technical architecture into clear business value and ROI narratives

Minimum Education Requirement

  • Bachelor’s Degree (Required): Candidates must hold a minimum of a Bachelor’s degree in Computer Science, Artificial Intelligence, Data Science, Mathematics, Statistics, Software Engineering, Information Technology, or a closely related quantitative field from a recognized and accredited university or institution. Applications without a qualifying degree will not be considered.
  • Added Advantage: Master’s degree in Artificial Intelligence, Machine Learning, Data Engineering, Data Science, or Computational Linguistics. Open-source contributions in AI or data engineering are strong differentiators.

Required & Recommended Certifications

Candidates must hold at least one of the mandatory certifications listed below. The breadth of the recommended certifications reflects the dual scope of this role; both data engineering depth and AI engineering capability are assessed at interview.

  • Microsoft Azure AI Engineer Associate (AI-102)
  • Google Professional Machine Learning Engineer
  • Azure Data Scientist Associate (DP-100)
  • AWS Certified Machine Learning - Specialty
  • Google Professional Data Engineer
  • Microsoft Azure Data Engineer Associate (DP-203)
  • Microsoft Azure Fundamentals (AZ-900)
  • Python Institute PCAP - Certified Associate in Python

Preferred Skills

  • Azure Prompt Flow & Semantic Kernel
  • LangChain / LlamaIndex / LlamaHub
  • Pinecone / Chroma / Weaviate / Qdrant (Vector DBs)
  • Fine-tuning techniques (LoRA / PEFT / QLoRA)
  • Multi-agent AI systems (AutoGen, CrewAI)
  • dbt (Data Build Tool) for data transformation
  • Apache Airflow for workflow orchestration
  • MLOps & CI/CD for AI (MLflow, Azure ML Pipelines)
  • Data visualization (Power BI, Looker, Tableau)
  • Low-code AI (Power Automate + AI Builder)
  • Real-time streaming analytics (Kafka, Azure Event Hubs)

Candidate Profile

  • A rare professional who thinks in data pipelines and speaks in AI models, you understand that the quality of an AI system is only as good as the data that feeds it, and you are equally skilled at engineering both
  • Intellectually restless - you are obsessed with what becomes possible when clean, governed, real-time enterprise data meets the latest large language models and machine learning architectures
  • Builder mentality - you move from data architecture whiteboard to working AI prototype rapidly, then iterate relentlessly until the solution is production-ready and generating measurable client value
  • Comfortable operating in ambiguity: both data and AI use cases at enterprise clients are rarely fully defined at the start, and you thrive in shaping clarity from complexity
  • Strong communicator who can translate both data architecture decisions and AI output capabilities into business value narratives for C-suite executives, IT directors, and government stakeholders
  • Committed to responsible, ethical AI and data practices - you are acutely aware of the governance, privacy, and societal implications of AI and data systems in developing-economy contexts
  • Proactive learner who keeps pace with the rapidly evolving landscape of both modern data engineering tooling and generative AI advances, and brings new capabilities to client conversations before clients even know to ask

Application Instructions

  • Interested candidates should send their application via email to: [email protected]
  • In the subject line of your email, clearly state the position you are applying for. Example: Application for Data & AI Solutions Engineer
  • In the email, indicate your expected monthly gross salary.
  • Please attach the following documents to your email: Passport Size Photo, Curriculum Vitae (CV), Application Letter, Copies of Certificates
  • Applications missing any of the above attachments or information will not be considered.
  • Only shortlisted candidates will be contacted.
