Shradhanjali Pradhan


Data visionary on a mission to transform lives through AI-driven innovation.

View My GitHub Profile

About Me

I am a passionate AI/ML professional with 2+ years of experience building scalable AI solutions, data engineering pipelines, and MLOps workflows. I specialize in intelligent systems that drive business impact and innovation. Committed to democratizing AI, I focus on ethical AI development, deep learning, and cloud-based ML solutions, continuously optimizing models for real-world applications. πŸš€

Education

Professional Experience

AI/ML Engineer at Steradian, Bhubaneswar

(June 2022 – July 2023)

Research and Development Engineer at Innotrat, Remote

(November 2022 – August 2023)

Data Analyst Intern at Remotecare, Chennai

(August 2021 – June 2022)

Applied Machine Learning Student Intern at Applied AI, Remote

(July 2021 – August 2022)

πŸš€ Skills

πŸ› οΈ Programming Languages & Tools

πŸ“Š Data Engineering

πŸ€– Machine Learning & AI

πŸ“ˆ Data Analysis & Visualization

βš™οΈ MLOps & DevOps

πŸ† Soft Skills

Projects

Below are some of the projects I’m most proud of. Each one showcases my technical skills and my commitment to solving complex problems and delivering real value through data-driven innovation.

πŸ“Œ AWS Serverless Data Pipeline

πŸš€ Project Overview

This project demonstrates a serverless data pipeline using AWS S3, SNS, SQS, Lambda, Glue, and Athena to automate cross-region data migration and querying. The pipeline transfers data from an S3 bucket in one region to another, automates schema detection with Glue, and enables querying with Athena.

βœ… Key Learning Outcomes:

πŸ”§ Project Steps

  1. Create S3 Buckets – Set up source and target buckets for data transfer.
  2. Configure SNS & SQS – Set up notifications and event-driven triggers.
  3. Develop Lambda Function – Automate the data transfer and trigger the Glue Crawler (see the sketch after this list).
  4. Run Glue Crawler – Detect schema and register data in the Glue Catalog.
  5. Query Data in Athena – Use SQL queries to analyze the transferred data.
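
A minimal sketch of steps 2–4, assuming boto3 and hypothetical resource names (`target-data-bucket`, `cross-region-crawler`): the Lambda handler unwraps the SNS-over-SQS event, copies each new object into the target bucket, and starts the Glue Crawler so the data becomes queryable in Athena.

```python
import json
from urllib.parse import unquote_plus

import boto3

# Hypothetical names for illustration; substitute the buckets/crawler from your setup.
TARGET_BUCKET = "target-data-bucket"   # bucket in the destination region
CRAWLER_NAME = "cross-region-crawler"  # Glue Crawler that registers the schema

s3 = boto3.client("s3")
glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered by SQS messages carrying S3 event notifications via SNS."""
    for record in event["Records"]:
        # The SQS body wraps the SNS envelope, which wraps the original S3 event
        # (assumes raw message delivery is disabled on the SNS subscription).
        sns_envelope = json.loads(record["body"])
        s3_event = json.loads(sns_envelope["Message"])
        for s3_record in s3_event.get("Records", []):
            source_bucket = s3_record["s3"]["bucket"]["name"]
            key = unquote_plus(s3_record["s3"]["object"]["key"])  # keys arrive URL-encoded
            # Copy the new object into the target bucket in the other region.
            s3.copy_object(
                Bucket=TARGET_BUCKET,
                Key=key,
                CopySource={"Bucket": source_bucket, "Key": key},
            )
    # Kick off schema detection so Athena can query the fresh data.
    try:
        glue.start_crawler(Name=CRAWLER_NAME)
    except glue.exceptions.CrawlerRunningException:
        pass  # a run is already in progress; it will pick up the new objects
    return {"statusCode": 200}
```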

This project provides hands-on experience in cloud automation, serverless workflows, and scalable data processing. πŸš€

πŸ“Œ AWS Glue & Snowflake ETL Pipeline

πŸš€ Project Overview

This project builds a scalable ETL pipeline using AWS Glue, S3, dbt, and Snowflake to extract data from an external API, store it in S3, and automate transformations with dbt before loading into Snowflake.

βœ… Key Learning Outcomes:

πŸ”§ Project Steps

  1. Set Up IAM Roles – Grant Glue and Snowflake access permissions.
  2. Extract & Store Data – Use AWS Glue to pull API data into S3 (see the sketch after this list).
  3. Integrate Snowflake & AWS – Configure secure data access.
  4. Transform Data with dbt – Build raw, transform, and mart models.
  5. Deploy dbt Environment – Automate transformations for scalable workflows.
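
A minimal sketch of step 2, assuming the requests library is available to the Glue job and using hypothetical names (`https://api.example.com/records`, `raw-api-landing`): the script pulls JSON from the external API and lands it in S3 under a date-partitioned key, ready for Snowflake ingestion and dbt transformation.

```python
import json
from datetime import datetime, timezone

import boto3
import requests

# Hypothetical endpoint and bucket; substitute your own API and landing zone.
API_URL = "https://api.example.com/records"
LANDING_BUCKET = "raw-api-landing"

def extract_to_s3():
    """Pull one batch from the API and write it to S3 as a timestamped JSON file."""
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()  # fail the Glue job run on HTTP errors
    records = response.json()

    # Date-partitioned key so downstream tools (Snowflake, dbt) can load incrementally.
    now = datetime.now(timezone.utc)
    key = f"raw/{now:%Y/%m/%d}/records_{now:%H%M%S}.json"

    boto3.client("s3").put_object(
        Bucket=LANDING_BUCKET,
        Key=key,
        Body=json.dumps(records).encode("utf-8"),
        ContentType="application/json",
    )
    return key

if __name__ == "__main__":
    print(f"Wrote s3://{LANDING_BUCKET}/{extract_to_s3()}")
```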

This project provides real-world experience in data extraction, ETL automation, and cloud-based data warehousing. πŸš€

LLM-Powered Wikipedia Chat Assistant with RAG

Object Detection Using YOLOv8

Chatbot with OpenAI GPT-3 using Flask

Automatic Speech Recognition With TensorFlow

Conversational Assistant Using Rasa

Build a Road Sign Recognition System with CNN

The Learning Agency Lab - PII Data Detection

Additional Information

If you’re interested in the more detailed aspects of my work or want to see my code in action, check out the repositories below. Each repository includes extensive documentation on how the project was built and how to run it.

I’m also open to feedback on my projects, so don’t hesitate to raise an issue or submit a pull request!

Thank you for taking the time to explore my work. Let’s connect and make something great together!

How to Reach Me

I am always interested in hearing about new opportunities and collaborations, or just chatting about technology and AI. Feel free to reach out through any of the following channels: