Amsterdam, the Netherlands
d.a.milikina@gmail.com
+31 6 34128258
GitHub | DataCamp | HackerRank | Kaggle | LinkedIn
Programming
Python, C#
Data Visualisation and Manipulation
Python (pandas, NumPy, PyTorch, TensorFlow, Matplotlib, scikit-learn), SQL, Tableau, Microsoft Power BI
Machine Learning and AI
Supervised and Unsupervised Algorithms, Clustering & Classification
Version Control
Git
Front-end development with React.js
JS, HTML5 and CSS3
Team and Product management
Trello, Azure DevOps, Jira
Bulgarian
English
Spanish
About the company
Otrium is a Netherlands-based fashion technology company that operates an online platform selling fashion items.
The company focuses on creating a sustainable and circular fashion economy by partnering with brands to sell excess or unsold inventory through its platform.
Otrium allows brands to reach a larger audience for their products while also addressing the issue of overproduction in the fashion industry.
Role and key deliverables
GrwNxt is a food-tech startup that aims to create solutions for autonomous indoor farming.
Role and key deliverables
My job as a Consultant at DeepLearning.AI was to review the videos and test the notebooks from the specialisations before they were published on Coursera. I also discussed the necessary content edits with the corresponding teams; mistakes and suggestions were logged on Trello boards, which the editors reviewed.
I worked on the NLP track, the TensorFlow track and partially on the GANs track.
My goal was to improve the notebooks by expressing the requirements in a way the end user could understand and by adding flowcharts or hints where needed.
Being an ambassador of Coding Girls enabled me to organise and facilitate tech-related events, speak at them, and take part in major IT events. The events I organised were for different age groups: most were for women, but I also ran workshops for kids. Supporting more girls to follow their dreams was one of my main objectives. I worked with companies such as Uber, Leanplum and MindHub, and attended many events and conferences as an ambassador representing the organisation and its values.
At DevriX I built a WordPress plugin and used the WordPress admin panel to add new content to sites. I updated components shared across different projects and implemented new ones, using Sass (Sassy CSS) and JS. During the internship I picked up basic PHP, which we used for our plugin's back end.
At JBoxers I improved my JS and React skills through courses from egghead.io and Udemy, built small projects to consolidate what I learned, and applied it in the company's work.
BSc Artificial Intelligence - Vrije Universiteit Amsterdam - GPA 7.6/10
I studied Spanish and Spanish culture in addition to math, science, geography, history and biology.
Through the specialization, I learned how to design applications that perform question-answering and sentiment analysis, summarise a text and build a chatbot.
In the first course, I performed sentiment analysis of tweets using logistic regression and Naive Bayes, used vector spaces to discover the relationships between words, and wrote a simple English-to-French translation algorithm. In the second course, I applied the Viterbi algorithm for part-of-speech tagging and wrote my own Word2Vec model, which uses a neural network to compute word embeddings. In the third course, I generated synthetic Shakespeare text using a GRU language model and trained a recurrent neural network to perform NER using LSTMs with linear layers.
Finally, in the fourth course, I learned how to translate whole sentences using the encoder-decoder attention model and built a Transformer to summarise a text.
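As an illustration of the first course's sentiment task, here is a minimal sketch using scikit-learn components (the course builds the classifier from scratch; the sample tweets and labels below are placeholders, not course data):

```python
# Minimal sketch: tweet sentiment analysis with logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["I love this movie", "this was a great day",
          "I hate waiting in line", "what a terrible experience"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative (placeholder labels)

vectorizer = CountVectorizer()           # bag-of-words features
X = vectorizer.fit_transform(tweets)

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["I love a great day"])))  # -> [1]
```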
Completing all four courses of the DeepLearning.AI TensorFlow Developer certificate enhanced my previous knowledge of TensorFlow. Through the courses, I explored more strategies to prevent overfitting, including augmentation and dropout. I applied transfer learning, extracted learned features from models, and built models with LSTM, GRU and RNN layers in TensorFlow. I trained LSTMs on existing text to generate poetry based on Shakespeare's poems. Last but not least, I explored how to prepare real-world data and use it to build a prediction model.
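A minimal sketch of the two overfitting strategies mentioned above, combining augmentation layers and dropout in a tf.keras model; the input shape and layer sizes are illustrative assumptions, not the course's exact architecture:

```python
# Sketch: image classifier with augmentation and dropout against overfitting.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(150, 150, 3)),        # assumed input size
    tf.keras.layers.RandomFlip("horizontal"),   # augmentation layers
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.5),               # dropout regularisation
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```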
Certificates
This specialization helped me build a solid foundation in Machine and Deep Learning. It taught me best practices, gave me many insights from recognized and influential people in the AI field, and prepared me to go deeper into the different applications of and approaches to Machine Learning.
By completing this career track, I gained more knowledge of and experience with Python. There were many coding assessments covering data manipulation with the pandas library (manipulating and merging DataFrames), relational databases in SQL, an introduction to the Shell, and Conda essentials.
Certificate
This skill track helped me familiarise myself with one of the most popular BI tools, Tableau. I learned how to organize and analyze data, create presentation-ready visualizations, build insightful dashboards, and apply analytics to worksheets. Moreover, I used data connectors to combine and prepare datasets and combined multiple data tables with various relationships, joins and unions. Furthermore, I learned how to manage data properties, such as renaming data fields, assigning aliases and changing data types.
Certificate
This career path enriched my insight into Deep Learning. The track contained more intermediate courses on SQL and on Python libraries such as Matplotlib and Seaborn, where I covered a wide variety of ways to display data (subplots, overlaid plots, and strip, swarm and violin plots). Furthermore, there were two courses on Statistical Thinking in Python, where I cleaned previously acquired data to draw accurate conclusions about its tendencies.
Certificate
Finishing the Data Scientist career path, I successfully upgraded the skills from the previous two tracks (Data Analyst and Python Programmer) thanks to the project and the case study I solved.
Certificate
Completing the Data Engineer track, I deepened my expertise in building datasets from imported files in different formats and in Bash scripting.
Moreover, I learned how to process data in the Shell and automated command-line tasks.
I integrated Amazon Web Services (AWS) into a data workflow: uploading data to Amazon Simple Storage Service (S3) via Boto, creating buckets, and subscribing people to SNS topics to receive critical notifications via SMS.
Boto is an AWS software development kit (SDK) for Python that enables creating, configuring and managing AWS services. Amazon Simple Notification Service (SNS) manages the delivery of messages from producers to consumers.
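A hedged Boto3 sketch of that workflow: creating a bucket, uploading a file, and subscribing a phone number to an SNS topic. The bucket name, file name, topic name and phone number are placeholders:

```python
# Illustrative Boto3 sketch of the S3 + SNS workflow described above.
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="my-data-bucket")                     # create a bucket
s3.upload_file("report.csv", "my-data-bucket", "report.csv")  # upload data

sns = boto3.client("sns")
topic = sns.create_topic(Name="critical-alerts")          # idempotent create
sns.subscribe(TopicArn=topic["TopicArn"],
              Protocol="sms",
              Endpoint="+31600000000")                    # placeholder number
sns.publish(TopicArn=topic["TopicArn"],
            Message="Pipeline finished with errors")      # SMS notification
```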
Another topic was big data analysis with Apache Spark, whose Python API, PySpark, lets you work with resilient distributed datasets (RDDs).
PySpark SQL adds SQL-style analysis on top of PySpark, while MLlib, Spark's machine learning library, is exposed in Python through a PySpark wrapper.
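A minimal PySpark sketch of those two pieces: an SQL query over a DataFrame registered as a view, followed by an MLlib model fit. The file name and column names are illustrative assumptions:

```python
# Sketch: PySpark SQL query plus an MLlib logistic regression.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("demo").getOrCreate()
df = spark.read.csv("data.csv", header=True, inferSchema=True)  # placeholder file

df.createOrReplaceTempView("records")                 # register for SQL access
spark.sql("SELECT label, AVG(feature1) FROM records GROUP BY label").show()

# Assemble feature columns into the vector column MLlib expects.
features = VectorAssembler(inputCols=["feature1", "feature2"],
                           outputCol="features").transform(df)
model = LogisticRegression(labelCol="label").fit(features)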
Another area I explored was error handling and transactions in SQL Server, as well as building and optimising triggers in SQL Server.
This track gave me significant knowledge about Keras, a Python library capable of running on top of TensorFlow.
There were exercises on building Sequential models with Dense layers (varying the number of neurons and layers). I also used Keras for image processing and for classifying clothing types. The track also covered XGBoost, a fast and scalable gradient-boosting library.
I learned how to measure AUC (area under the curve) and accuracy, tune hyperparameters within a Pipeline, and many other valuable techniques.
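A short sketch of measuring AUC for an XGBoost classifier inside a scikit-learn Pipeline; the synthetic dataset stands in for real course data:

```python
# Sketch: AUC via cross-validation for XGBoost inside a Pipeline.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, random_state=0)  # synthetic data

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("xgb", XGBClassifier(n_estimators=100, max_depth=3)),
])
auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")  # AUC per fold
print(auc.mean())
```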
Another important Machine Learning tool included in the path was Apache Spark, integrated with Python.
This project investigates the influence of location and price on restaurant ratings, addressing the lack of research in this area. By applying statistical techniques, it seeks to fill this gap and provide valuable insights for the restaurant industry.
Methodology:
This project investigates strategies for managing imbalanced image datasets in Convolutional Neural Networks (CNNs) by comparing two techniques: oversampling and class weighting. The focus is on a dataset containing images of whales and dolphins, which exhibits a significant class imbalance.
Objectives:
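As an illustration of the two techniques compared in this project, here is a hedged sketch of class weighting (as passed to Keras's class_weight argument) and naive random oversampling of the minority class; the label array is a placeholder, not the whale-and-dolphin data:

```python
# Sketch: class weighting vs. random oversampling for imbalanced labels.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 900 + [1] * 100)   # imbalanced labels (placeholder)

# 1) Class weighting: per-class loss scaling, usable as
#    model.fit(X_train, y_train, class_weight=weights) in Keras.
classes = np.unique(y_train)
weights = dict(zip(classes,
                   compute_class_weight("balanced", classes=classes, y=y_train)))

# 2) Oversampling: duplicate minority-class indices until classes balance.
minority = np.where(y_train == 1)[0]
extra = np.random.choice(minority, size=800, replace=True)
resampled_idx = np.concatenate([np.arange(len(y_train)), extra])
```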
A platform inspired by the BookCrossing initiative and the trends of the sharing economy.
The main idea is to give people the possibility to connect with others who have a particular book in a given region and exchange one book for another without attaching any monetary value to the exchanged books.
This way, small communities can be created within cities/regions/counties or even within a company, school or university.
The biggest goal was to build the largest open library, one that enables books to travel worldwide.
For the front end we used React.js with the Material Design Lite (MDL) library for styling the interface.
For the back end we used ASP.NET Core with EF Core and SQL Server.
Demo website - the demo currently contains placeholder info and media until the actual content is added.
We created the website platform to show our development skills. The main idea was to find diverse and exciting projects; we were motivated and open to new challenges, which is why we created a company offering this type of service. The platform allowed users to specify and customise their requirements for a web platform.
Demo website - the demo currently contains placeholder info and media until the actual content is added.
We were motivated to take care of our plants and wanted to give them precisely calculated water and sunlight, without which they could not survive.
Some of the species are very demanding and difficult to grow, so we decided to build our own custom garden project to monitor them.
We used an Arduino with various components to receive and display the sensor values.
For the database we used Apache Cassandra; for displaying the charts we chose React-Vis. Python handles data ingestion from the Arduino over USB (future versions over LoRaWAN), mapping the input and updating Apache Cassandra, and a GoLang service will expose the data from Cassandra via REST or GraphQL (not yet decided or implemented).
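A hedged sketch of the Python ingestion step: reading sensor values from the Arduino over USB with pyserial and inserting them into Cassandra. The serial port, keyspace and table schema are assumptions for illustration:

```python
# Sketch: Arduino-over-USB ingestion into Apache Cassandra.
import serial
from cassandra.cluster import Cluster

ser = serial.Serial("/dev/ttyUSB0", 9600)            # Arduino over USB
session = Cluster(["127.0.0.1"]).connect("garden")   # assumed local keyspace

while True:
    line = ser.readline().decode().strip()           # e.g. "moisture,512"
    sensor, value = line.split(",")
    session.execute(
        "INSERT INTO readings (sensor, ts, value) "
        "VALUES (%s, toTimestamp(now()), %s)",       # assumed table schema
        (sensor, int(value)),
    )
```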
I partially translated Visual Studio Code from English to Bulgarian.
I volunteered at Mega Dojo Sofia 2.0, part of the closing of the Bulgarian Presidency of the Council of the European Union.