Joining WibiData was the happy conclusion to an unlikely series of events. It all began at the start of my amateur programming career, when I discovered video games. In fact, I was never really any good at video games, but I learned to appreciate the underlying technology. From video games, I started dabbling on the web, writing crawlers and bots for cult classic games (Tron, anyone?), and became a hard-core Linux proselyte at my conservative suburban Ohio high school (“Slackware or gtfo”). At the University of Michigan and later at the Ohio State University, I ingratiated myself with the scientific research community and became enamored with computation as a tool for discovery and invention. I journeyed through the esoteric reaches of mathematics and computability theory, collapsed wavefunctions, explored human ancestry through computational phylogenetics, and even predicted Supreme Court decisions using support vector machines.

After college, unsure of where (or perhaps whether) to continue my studies, I settled down in Texas as an analyst in the financial services industry. There too, I encountered computation in the form of predictive modeling of credit risk.

Through my experiences, I learned that data is abundant and that we need smarter computational tools to make sense of Big Data. It is not unusual, for example, for companies to store millions of customers’ actions every second on the web. Or consider genetics, where a single whole genome requires gigabytes of storage per individual. Such data can grow so quickly that storing the raw data alone takes up petabytes. Without the proper tools, that data stagnates from disuse; with them, it can be cleverly parsed to extract salient information.

The need for computation is also present in the classic problem of personalized online product recommendations. Companies like Netflix and Amazon serve content to customers, but the only interaction between customer and provider is a brief set of page-views and the occasional review.
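To make the recommendation problem concrete, here is a minimal sketch of user-based collaborative filtering with cosine similarity. All users, items, and ratings below are hypothetical, and production systems operate at a vastly larger scale:

```python
from math import sqrt

# Hypothetical interaction data: user -> {item: rating}.
# Names and numbers are made up purely for illustration.
ratings = {
    "alice": {"camera": 5, "tripod": 4, "lens": 5},
    "bob":   {"camera": 4, "tripod": 5, "drone": 2},
    "carol": {"lens": 4, "drone": 5, "tripod": 3},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors (dicts)."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(x * x for x in u.values()))
    norm_v = sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user, ratings, k=1):
    """Rank items the user has not seen by similarity-weighted
    ratings from the other users."""
    seen = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, theirs)
        for item, r in theirs.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice", ratings))  # the one item alice has not rated
```

The same idea scales up with MapReduce-style computation: similarities and weighted sums decompose naturally into per-pair and per-item aggregations.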
Using machine learning techniques, providers can infer additional products that a user might be interested in, thereby increasing the probability of purchase or lengthening engagement. With common techniques, though, by the time a machine completes the analysis and recommendation, the opportunity to capture the user’s attention has long passed. If this sort of analysis can be done rapidly, the end-user’s experience can be augmented with better personalization in social, finance, sales, health, and other domains. But how do you work with terabytes of data in a reasonable amount of time? How do you even efficiently store and query such data? Traditional relational models for storing data are too cumbersome at this scale, and without severely restricting sample sizes, even basic analysis on a traditional workstation becomes infeasible.

In my quest for modern techniques to work with Big Data, I again started interacting with the academic world. In early 2012, I enrolled in Prof. Jure Leskovec’s “Mining Large Data Sets” as a remote student. The class covered some very deep and interesting topics, including MapReduce, hashing methods, association rule mining, collaborative filtering, dimensionality reduction, and data stream analysis.

In that class, I met and befriended Renuka, who had recently joined WibiData. She recognized that I had a passion for technology and an entrepreneurial personality, so even though I had no prior product development experience, she encouraged me to apply to WibiData. During the interview process I got to know Garrett, Kiyan, and the rest of the team. When I visited WibiData in late 2012, I was greatly impressed by the intellectual quality of everyone there. As WibiData was transitioning from its startup phase into a rapidly growing organization, I knew this would be an amazing learning opportunity for me. I packed my bags, got on Route 66, and made the pilgrimage from Texas to California.
All the while, Horace Greeley’s advice was ringing in my mind: “Go West, young man, and grow up with the country.”

I have been at WibiData for three weeks now, and I am absolutely loving the team and the environment. I am becoming a stronger coder by the day and enjoying startup life to the fullest. During my first week, I got a full refresher in coding practices and version control; during my second week, I learned to juggle (literally); and during my third, I was in NYC for Garrett’s WibiWorkshop, rubbing shoulders with Big Data enthusiasts at large corporations. I am, however, a little disappointed by my newly developed addiction to root beer (free food and drinks at work do not help). I plan to continue on this trajectory (minus the root beer) to become a very strong hacker and teammate. As part of the WibiData team, I hope to become an integral part of the Big Data revolution and to build the sort of cutting-edge machine learning and data mining technologies that the world has yet to see.
WibiRetail is the first out-of-the-box platform designed to let retailers rapidly deploy and own algorithmically driven personalized shopping experiences. WibiRetail lets retailers retain their valuable data and the IP behind their experiences, and arms them with state-of-the-art data infrastructure, machine-learning models, experiment frameworks, and simple interfaces for next-generation personalization, without needing to build these tools in-house. WibiRetail makes Big Data usable, actionable, and profitable.