
Christiano C.

AI and Data Science, Information Technology, Operations

Phone   (214) GO-RELAY / (214) 467-3529

Email   Careers@RelayHumanCloud.com

Location   Ahmedabad, India


SKILLS

  • Azure AI Foundry
  • AWS
  • GitHub
  • PostgreSQL
  • Python
  • Machine Learning

  • Deep Learning
  • Natural Language Processing
  • Apache Kafka
  • Mage
  • Metabase
  • NumPy

  • Pandas
  • scikit-learn
  • Hugging Face Hub
  • LangChain
  • FastAPI
  • Ollama

EDUCATION

Master of Technology in Computer Science

Birla Vishvakarma Mahavidyalaya Engineering College, Gujarat (2023)

Bachelor of Engineering in Computer Science

Govt. Engineering College, Gujarat (2021)

LICENSES & CERTIFICATIONS

  • Azure Data Scientist Associate | Microsoft (DP-100)
  • Azure Fundamentals | Microsoft (AZ-900)
  • Machine Learning | Stanford Online
  • Neural Networks and Deep Learning | Stanford Online


PROFESSIONAL SUMMARY

Experienced AI Engineer specializing in AI-driven automation and cloud-based solutions. Proven track record in optimizing processes, reducing manual effort, and enhancing efficiency by integrating LLMs and cloud technologies. Skilled in developing scalable applications for banking, training, and media localization using AWS and Azure, delivering significant time savings and consistent, high-quality results.

PROFESSIONAL EXPERIENCE

AI Engineer

Relay Human Cloud | India | Apr 2025 – Present

  • Designed and implemented AI-based solutions to streamline HR, compliance, and finance workflows, reducing manual effort and improving operational efficiency.
  • Built LLM-powered chatbots and document intelligence systems to enable faster information retrieval and intelligent process automation.
  • Trained and optimized AI models on domain-specific documents in collaboration with product and engineering teams to improve response accuracy and relevance.
  • Standardized and cleaned data from multiple external service providers to improve data quality, reliability, and accessibility across systems.
  • Developed scalable data ingestion and integration pipelines using Mage, consolidating raw data into a centralized master database in Supabase.
  • Integrated Apache Kafka for streaming and asynchronous ingestion to support event-driven processing and decoupled system architecture.
  • Implemented data validation, schema alignment, and deduplication logic to ensure consistency and integrity across datasets.
  • Automated IBV report generation workflows, significantly reducing turnaround time while improving accuracy and repeatability.
  • Built operational and analytics dashboards in Metabase to monitor pipeline health, data quality metrics, and business KPIs.
  • Monitored and maintained data pipelines, proactively resolving failures and edge cases to ensure high availability and system stability.

Jr. Data Scientist

Valens Datalabs Pvt Ltd | India | Sep 2023 – Mar 2025

  • Developed an LLM-powered risk profiling web application that automated and streamlined the manual profiling of entities involved in banking transactions, reducing processing time from 60 minutes to 15 seconds.
  • Leveraged the Bing Advanced Search API, Mistral-7B-Instruct-v0.2, and Azure services (Virtual Machines, App Service) to enhance the web application’s functionality and efficiency.
  • Designed a RAG-based Q&A and learning system for new trainees in a food manufacturing plant, eliminating the need for constant one-on-one human training and significantly reducing training overhead.
  • Implemented context-aware responses with source references and timestamped video guides, optimizing the learning process for new employees.
  • Utilized GPT-4-32k, text-embedding-3-large, Azure AI Search, and Azure AI Video Indexer to build and deploy the learning system.
  • Developed a culturally accurate English-to-Indonesian subtitle translation API, automating a multi-hour subtitle localization process and reducing translation time to seconds.
  • Engineered a two-stage Lambda workflow using AWS API Gateway and Lambda to increase scalability and consistency in media localization, improving the overall quality and speed.
  • Used Amazon Bedrock (Claude 3.5 Sonnet v2), API Gateway, Lambda, and S3 to deploy the translation service.
  • Created a high-performance API for generating transcripts, summaries, sentiment analysis, diarization, PII masking, and politeness detection for customer-care call audio, processing 9,000 hours of audio per day.
  • Applied Llama 3.1 8B on an AWS g4dn.2xlarge instance, with xlm-roberta-large-tydip and pyannote/speaker-diarization-3.1, to build the processing pipeline.