Millie
your market intelligence analyst
Search Results
5,477 results
Towards Data Science 08/21/2019 22:44
Smaller Code, Less Pain. Photo by on. For an NLP task, you might need to tokenize text or build the vocabulary in the pre-processing. And you have probably experienced that the pre-processing code is as messy as your desk. Forgive me if your desk is clean :) I have had that experience too. That’s why I created LineFlow to ease your pain! It will make your “desk” as clean as possible. What does the real code look like? Take a look at the figure below. The pre-processing includes tokenization, building the vocabulary, and indexing. I used for this picture. The left part is the example code from the , which does common pre-processing on text data. The right part is written with LineFlow to implement the exact same processing. You should get the idea that how
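Illustrative note: a minimal pre-processing sketch in plain Python covering the three steps this caption mentions (tokenization, vocabulary building, and indexing). It does not use LineFlow's actual API; the sample sentences and the <unk> token are invented for the example.

    # Plain-Python sketch of tokenization, vocabulary building, and indexing.
    # This is not LineFlow's API; sentences and the <unk> token are examples only.
    from collections import Counter

    sentences = ["the quick brown fox", "the lazy dog"]

    # 1. Tokenization: split each sentence into tokens.
    tokenized = [s.split() for s in sentences]

    # 2. Vocabulary: map each unique token to an integer id, most frequent first.
    counts = Counter(tok for sent in tokenized for tok in sent)
    vocab = {tok: i for i, (tok, _) in enumerate(counts.most_common(), start=1)}
    vocab["<unk>"] = 0  # id for tokens not seen during vocabulary building

    # 3. Indexing: replace every token with its id.
    indexed = [[vocab.get(tok, vocab["<unk>"]) for tok in sent] for sent in tokenized]
    print(indexed)  # [[1, 2, 3, 4], [1, 5, 6]]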
Towards Data Science 08/21/2019 22:39
An autoencoder toolbox from most basic to most fancy. In the wonderful world of machine learning and artificial intelligence, there exists this structure called an autoencoder. Autoencoders are a type of neural network used in unsupervised learning (or, to some, semi-unsupervised learning). There are many different types of autoencoders used for many purposes, some generative, some predictive, etc. This article should provide you with a toolbox and guide to the different types of autoencoders. Traditional Autoencoders (AE). The basic autoencoder. The basic type of autoencoder looks like the one above. It consists of an input layer (the first layer), a hidden layer (the yellow layer), and an output layer (the last layer). The objectiv.
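Illustrative note: a minimal sketch of the basic three-layer autoencoder this caption describes (input layer, smaller hidden layer, output layer), assuming Keras is installed; the 784/32 layer sizes are arbitrary choices for the example, not taken from the article.

    # Minimal autoencoder: compress 784-dim inputs to a 32-dim hidden layer
    # and reconstruct them. Sizes and loss are illustrative only.
    from tensorflow import keras
    from tensorflow.keras import layers

    inputs = keras.Input(shape=(784,))
    hidden = layers.Dense(32, activation="relu")(inputs)       # hidden (encoding) layer
    outputs = layers.Dense(784, activation="sigmoid")(hidden)  # output (reconstruction) layer

    autoencoder = keras.Model(inputs, outputs)
    autoencoder.compile(optimizer="adam", loss="mse")
    # Training reconstructs the input from itself (unsupervised):
    # autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)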
Towards Data Science 08/21/2019 19:47
Bhargava Reddy Morampalli, a microbiologist from India, read my first post on web scraping from my old blog. If you didn’t get a chance to check out that post, you can read it here. He reached out to me after reading my post for help on his implementation of web scraping based on my article on this . Here I will refactor Bhargava’s example as we did in the article above. I figured it is useful to have another web scraping example since not all sites have the same structure. Made with imgflip.com. Note: Since websites change their layout (or the staff could change), the results you get could be different. The end result is current as of 8/21/19. For those that read my original version of this article, the site structure has changed, so the.
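Illustrative note: since the caption's own code is not shown, here is a generic scraping sketch using requests and BeautifulSoup; the URL and the elements extracted are placeholders, not the site from the article.

    # Generic web-scraping sketch with requests + BeautifulSoup.
    # The URL and the elements extracted are placeholders.
    import requests
    from bs4 import BeautifulSoup

    response = requests.get("https://example.com")  # placeholder URL
    response.raise_for_status()                     # fail loudly on HTTP errors

    soup = BeautifulSoup(response.text, "html.parser")
    for link in soup.find_all("a"):                 # every anchor tag on the page
        print(link.get("href"), link.get_text(strip=True))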
Towards Data Science 08/21/2019 19:30
Cerebras’ chip can become the de facto chip for Deep Learning. One of the biggest problems with Deep Learning models is that they are becoming too big to train on a single GPU. If the current models were trained on a single GPU, they would take too long. In order to train models in a timely fashion, it is necessary to train these models with multiple GPUs. We to scale training methods to use 100s of GPUs or even 1000s of GPUs. For example, a was able to reduce the ImageNet training time from 2 weeks to 18 minutes, or train the largest and best state-of-the-art Transformer-XL in 2 weeks instead of 4 years. He used 100s of GPUs to do that. As models become bigger, more processors are needed. Whenever scaling the training of these models in.
Towards Data Science 08/21/2019 19:29
Monte Carlo is a conceptually simple but powerful technique that is widely used. It makes use of randomness to answer questions. In this post, I’ll explain how to solve the using the . The implementation is in Python, a programming language whose name is in itself a tribute to the British Comedy Group —. Monty Hall Problem. The first time I was introduced to the Monty Hall problem was in the movie — 21. This clip showcases the problem. There are explanations for the same by and . Let’s jump into the problem — Imagine you have three doors in front of you. The game show host asks you to choose one of the doors. If you choose the correct door, you win a car; otherwise, you get a goat. Let’s say you chose Door №1. The game show host who knows w.
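Illustrative note: a short Monte Carlo sketch of the Monty Hall problem along the lines this caption describes; it is not the author's code, and the trial count is arbitrary. The estimate for always switching should come out near 2/3.

    # Monte Carlo estimate of the win rate when the contestant always switches.
    import random

    def monty_hall(trials=100_000):
        wins_by_switching = 0
        for _ in range(trials):
            car = random.randrange(3)       # door hiding the car
            choice = random.randrange(3)    # contestant's first pick
            # Host opens a door that is neither the pick nor the car.
            opened = next(d for d in range(3) if d != choice and d != car)
            # Switching means taking the remaining closed door.
            switched = next(d for d in range(3) if d != choice and d != opened)
            wins_by_switching += (switched == car)
        return wins_by_switching / trials

    print(monty_hall())  # roughly 0.66, versus about 0.33 for staying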
Towards Data Science 08/21/2019 19:28
A walk-through of a Credit Card Fraud Detection Problem. As part of the on the new Kaggle Competition for Credit Card Fraud Detection, I show you how to vastly improve model performance by ensembling a few models together. I also discuss strategies for handling the class imbalance and splitting the data into train/validation sets. was originally published in on Medium, where people are continuing the conversation by highlighting and responding to this story.
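Illustrative note: a minimal sketch of the techniques this caption names, a stratified train/validation split for the class imbalance plus a simple soft-voting ensemble, assuming scikit-learn; the synthetic data and the two base models are stand-ins, not the author's setup.

    # Stratified split plus a soft-voting ensemble on an imbalanced synthetic dataset.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the fraud data: roughly 2% positive class.
    X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)

    # Stratify so train and validation keep the same class ratio.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    ensemble = VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000, class_weight="balanced")),
            ("rf", RandomForestClassifier(n_estimators=200, class_weight="balanced")),
        ],
        voting="soft",  # average predicted probabilities across models
    )
    ensemble.fit(X_train, y_train)
    print(ensemble.score(X_val, y_val))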
Towards Data Science 08/21/2019 16:44
Joel Grus on the. Editor’s note: This is the first episode of the Towards Data Science podcast “Climbing the Data Science Ladder” series, hosted by Jeremie Harris, Edouard Harris and Russell Pollari. Together, they run a data science mentorship startup called . You can listen to the podcast below. To most data scientists, the Jupyter notebook is a staple tool: it’s where they learned the ropes, it’s where they go to prototype models or explore their data — basically, it’s the default arena for all their data science work. But Joel Grus isn’t like most data scientists: he’s a former hedge fund manager and former Googler, and author of . He currently works as a research engineer at the , and maintains a . Oh, and he thinks you should.
Towards Data Science 08/21/2019 11:17
Most people in the data science community know Kaggle as a place to learn and grow your skills. One popular way for practitioners to improve is to compete in prediction challenges. For newcomers, it can be overwhelming to jump in and compete on the site in an actual challenge. At least, that’s how I always felt. After sitting on the sidelines, I decided to finally dip my toes into the Kaggle competition arena at the end of 2018. In a short time, I’ve learned and honed many data science skills that I wouldn’t have otherwise been able to practice. To my surprise, I also found that competitions can be a lot of fun — even as a newcomer. In this article, I seek to demystify some of the components of Kaggle competitions that may not be immediatel.
Towards Data Science 08/21/2019 11:17
Bayesian Basketball: was Toronto really the best team during the NBA 2019 season? Let’s go back in time and see if we can end up with a different winner for the NBA 2019 title. How? By using Bayesian simulations. Credit: NYTimes. [This article was inspired by the work of Baio and Blangiardo (2010), Daniel Weitzenfeld’s great blog, and Peadar Coyle’s on Hierarchical models.]. Introduction. Bayesian simulation relies heavily on statistical distributions to model outcomes and therefore serves as a tool for simulating scenarios. At its base lies Bayes’ theorem. Bayes’ formula contains four parts: the posterior, the prior, the likelihood, and the evidence. Going into the details of each of them is not the goal of this article, but just keep in mind th.
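Illustrative note: a tiny worked example of the four parts of Bayes' formula this caption lists (posterior, prior, likelihood, evidence); the probabilities are made-up numbers, not estimates from NBA data.

    # Bayes' theorem: posterior = likelihood * prior / evidence.
    prior = 0.6             # P(A): prior belief that team A is the stronger team
    likelihood = 0.7        # P(B|A): chance of the observed result if A is stronger
    likelihood_not_a = 0.4  # P(B|not A): chance of the same result otherwise

    evidence = likelihood * prior + likelihood_not_a * (1 - prior)  # P(B)
    posterior = likelihood * prior / evidence                       # P(A|B)
    print(round(posterior, 3))  # 0.724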
Towards Data Science 08/21/2019 09:17
Opal Butterfly VR, courtesy of. AI may soon surpass human artistic creativity. and. Summary. Our experience of art — painting, music, sculpture, poetry — taps into multiple senses, associations, memories and emotions, and involves what one may call a conscious component. AI today lacks subjective experience and multimodality but is starting to acquire artistic creativity. Does creativity require the capacity of conscious experience? We assert that the answer is no. Specialized AI may soon become more creative than humans and better at capturing and representing collective experiences, as multimodality, supervised learning and collaboration with humans will significantly enhance today’s creative bots. Much of art is about recognizing and manipu.

Health Care

Health and Wellness

Business Issues

Companies - Public

Companies - Venture Funded

Financial Results

Global Markets

Global Risk Factors

Government Agencies

Insurance

Information Technologies

Job Titles

Legal and Regulatory

Political Entities

Sources

Strategic Scenarios

Trends

Hints:

On this page, you see the results of the search you have run. You may also do the following:

  •  Click the drop-down menu on the right-hand side of the page to choose between the machine learning-produced Insights Reports and the listing of concepts extracted from the results, in chart or list format.

  •  View the number of results the search returned in each of your collections, and click any of those numbers to view the full listing of results from that collection.

  •  Use the search adjustment drop-downs to change the scope, sorting, and presentation of your results.

  •  Show or hide the record’s caption (content description).

  •  Show actions that can be performed on a search result record.

  •  Click the Save button after running your search to save it; its results will then be updated each time relevant new content is added to the designated collection. You may also choose to be notified via search alerts.

Click here for more info on Search Results

Click here for more info on Machine Learning applications