A few interesting things about me. I love to read science fiction (my favorite is Frank Herbert's Dune). I am also an avid gamer. I love to play competitive strategy games and first-person shooters. Lastly, I love learning. Every day I push myself to learn something new, whether that be about machine learning, software engineering, or miscellaneous facts about the universe.
I recently started working as a data process manager at Capital One. My work mainly revolves around automating business reports that were previously entirely manual. I have cut down the number of graduate classes I am taking to accommodate a full-time work schedule, but I plan to graduate from the University of Texas at Dallas in May with a Master's in Computer Science.
I maintain servers for database storage, model training, and model deployment.
I have worked with researchers to apply NLP techniques to make sense of the motivations behind human interactions.
Machine learning is more than an API call to scikit-learn. I love the math and theory as well as the implementation.
I regularly extract data from Hadoop clusters using the Hive framework.
I implement machine learning models in real world production systems using REST APIs.
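As a rough sketch of what serving a model over a REST API can look like, here is a minimal example using only the Python standard library. The endpoint name `/predict`, the `score_record` function, and its toy feature weights are all hypothetical stand-ins; a real deployment would load a trained model artifact and typically sit behind a production web framework.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def score_record(features):
    # Stand-in for a real model's predict(); returns a toy weighted sum.
    weights = {"balance": 0.001, "age": 0.01}
    z = sum(weights.get(k, 0.0) * v for k, v in features.items())
    return {"score": round(z, 4)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read the JSON request body, score it, and return JSON.
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps(score_record(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8000), PredictHandler).serve_forever()
```

A client would then call something like `curl -X POST localhost:8000/predict -d '{"balance": 1000, "age": 30}'` and get back a JSON score.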
I love telling a story. Making a beautiful and compelling presentation is one of my favorite skills.
Databases (SQL) - 5
Servers (Linux / Bash) - 4
Big Data (Hive / Spark) - 3
Python - 5
Computer Vision (TensorFlow) - 4
NLP (spaCy / TensorFlow) - 4
Teaching / Presenting - 5
Statistical Methods - 3
Visualization (Tableau) - 2
Take a look at my recent work.
A helpful tutorial I wrote recently on setting up a Bash script that uses the AWS CLI to start, log into, and then shut down an EC2 instance. (I didn't want to forget the instance was running and lose money.)
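The core of that workflow can be sketched as below. The instance ID, key path, and `ec2-user` login are placeholders to substitute with your own values, and the `AWS_CMD` indirection is just a convenience for dry-running the script (set `AWS_CMD=echo` to print the commands instead of executing them).

```shell
#!/usr/bin/env bash
# Sketch of the start / log in / shut down EC2 workflow.
set -euo pipefail

AWS_CMD="${AWS_CMD:-aws}"                           # set AWS_CMD=echo to dry-run
INSTANCE_ID="${INSTANCE_ID:-i-0123456789abcdef0}"   # placeholder instance ID
KEY_FILE="${KEY_FILE:-$HOME/.ssh/my-key.pem}"       # placeholder key path

start_instance() {
    "$AWS_CMD" ec2 start-instances --instance-ids "$INSTANCE_ID"
    "$AWS_CMD" ec2 wait instance-running --instance-ids "$INSTANCE_ID"
}

public_ip() {
    "$AWS_CMD" ec2 describe-instances --instance-ids "$INSTANCE_ID" \
        --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
}

stop_instance() {
    # Stopping (not terminating) keeps the EBS volume but ends compute billing.
    "$AWS_CMD" ec2 stop-instances --instance-ids "$INSTANCE_ID"
}

if [ "${1:-}" = "run" ]; then
    start_instance
    ssh -i "$KEY_FILE" "ec2-user@$(public_ip)"
    stop_instance   # runs once the ssh session ends, so nothing is left running
fi
```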
Tested the use of Word2Vec embeddings with a variety of sequential deep learning models for the task of language modeling (predicting the next word in a sentence).
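The data flow in that experiment can be illustrated with a toy sketch: a tiny hypothetical vocabulary, random stand-in embeddings (a real setup would load pretrained Word2Vec vectors), and a single plain-Python recurrent step in place of a deep TensorFlow model. The weights are untrained, so this shows only the embed-then-recur-then-softmax pipeline, not a useful predictor.

```python
import math
import random

vocab = ["<s>", "the", "cat", "sat", "</s>"]
DIM = 4
random.seed(0)
# Stand-in embeddings; a real setup loads pretrained Word2Vec vectors instead.
embed = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in vocab}

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def rnn_step(h, x, Wh, Wx):
    # One vanilla recurrent update: h' = tanh(Wh @ h + Wx @ x)
    return [math.tanh(sum(Wh[i][j] * h[j] for j in range(len(h))) +
                      sum(Wx[i][j] * x[j] for j in range(DIM)))
            for i in range(len(h))]

def predict_next(words, hidden=4):
    # Random, untrained weights -- illustrative only.
    Wh = [[random.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(hidden)]
    Wx = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in range(hidden)]
    Wo = [[random.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(len(vocab))]
    h = [0.0] * hidden
    for w in words:                      # fold the sentence into a hidden state
        h = rnn_step(h, embed[w], Wh, Wx)
    logits = [sum(Wo[i][j] * h[j] for j in range(hidden)) for i in range(len(vocab))]
    probs = softmax(logits)              # distribution over the next word
    return vocab[max(range(len(vocab)), key=probs.__getitem__)]
```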
A fully functional, SQL-compliant database implemented from scratch in Python. DavisBase compresses data using a custom bit-level encoding for maximal data compression. With a fixed file size of 512Kb, DavisBase performs well in low-memory environments while keeping query times low.
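DavisBase's actual encoding isn't reproduced here, but as an illustration of the general idea behind bit-level compression, here is a minimal LEB128-style variable-length integer codec: small values occupy a single byte, and only large values spend more, with the high bit of each byte marking continuation.

```python
def encode_varint(n):
    """Encode a non-negative int using 7 payload bits per byte (LEB128-style)."""
    out = bytearray()
    while True:
        b = n & 0x7F          # low 7 bits
        n >>= 7
        if n:
            out.append(b | 0x80)  # high bit set: more bytes follow
        else:
            out.append(b)         # high bit clear: last byte
            return bytes(out)

def decode_varint(data):
    """Decode bytes produced by encode_varint back into an int."""
    n = shift = 0
    for b in data:
        n |= (b & 0x7F) << shift
        if not b & 0x80:      # last byte reached
            break
        shift += 7
    return n
```

For example, `300` encodes to the two bytes `b'\xac\x02'` instead of the four a fixed-width 32-bit field would need.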