
Data Science Weekly Newsletter
Issue 341
June 4, 2020

Editor's Picks

  • GPT-3, a Giant Step for Deep Learning And NLP
    A few days ago, OpenAI announced a new successor to their Language Model (LM) - GPT-3. This is the largest model trained so far, with 175 billion parameters. While training this large model has its merits, reading a large portion of its 72 pages can be tiresome. In this blog post I’ll highlight the parts that I find interesting for people familiar with LMs who merely wish to know (most of) the important points of this work...
  • Do I Need to Go to University?
    I’ve been somewhat successful as a researcher without an undergraduate degree or PhD. As a result, I often have people ask me whether it’s possible to be successful without going to university, whether they personally should go, or whether I can help them persuade their parents. At this point, I've probably received around a hundred emails on this topic. It's still hard to know how to respond...What this essay aims to do is share thoughts on ways to think about the decision, on navigating the social and emotional challenges, and on resources you may find helpful...



A Message From This Week's Sponsor



Data scientists are in demand on Vettery

Vettery is an online hiring marketplace that's changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today.


Data Science Articles & Videos

  • Acme: A new framework for distributed reinforcement learning
    We [DeepMind] believe [Reinforcement Learning] offers an avenue for solving some of our greatest challenges: from drug design to industrial and space robotics, or improving energy efficiency in a variety of applications...However, in this pursuit, the scale and complexity of RL programs has grown dramatically over time. This has made it increasingly difficult for researchers to rapidly prototype ideas, and has caused serious reproducibility issues. To address this, we are launching Acme — a tool to increase reproducibility in RL and simplify the ability of researchers to develop novel and creative algorithms...
  • AutoSweep: Recovering 3D Editable Objects from a Single Photograph
    This paper presents a fully automatic framework for extracting editable 3D objects directly from a single photograph...Our work makes an attempt towards recovering two types of primitive-shaped objects, namely, generalized cuboids and generalized cylinders. Qualitative and quantitative experiments show that our algorithm can recover high quality 3D models and outperforms existing methods in both instance segmentation and 3D reconstruction...
  • Penrose: from mathematical notation to beautiful diagrams
    We introduce a system called Penrose for creating mathematical diagrams. Its basic functionality is to translate abstract statements written in familiar math-like notation into one or more possible visual representations. Rather than rely on a fixed library of visualization tools, the visual representation is user-defined in a constraint-based specification language; diagrams are then generated automatically via constrained numerical optimization...
  • Identifying and mitigating liabilities and risks associated with AI
    In this episode of the Data Exchange [Podcast] I speak with Andrew Burt, Chief Legal Officer at Immuta and co-founder and Managing Partner of BNH.ai, a new law firm focused on AI compliance and related topics. As AI and machine learning become more widely deployed, lawyers and technologists need to collaborate more closely so they can identify and mitigate liabilities and risks associated with AI. BNH is the first law firm run by lawyers and technologists focused on helping companies identify and mitigate those risks...
  • DeepFaceDrawing Generates Photorealistic Portraits from Freehand Sketches
    A team of researchers from the Chinese Academy of Sciences and the City University of Hong Kong has introduced a local-to-global approach that can generate lifelike human portraits from relatively rudimentary sketches...the key idea behind the new approach is to implicitly learn a space of plausible face sketches from real face sketch images and find the point in this space that best approximates the input sketch. Because this approach treats input sketches more as ‘soft’ constraints that will guide image synthesis, it is able to produce high-quality face images with increased plausibility even from rough and/or incomplete inputs...
  • Flows for simultaneous manifold learning and density estimation
    We introduce manifold-learning flows (M-flows), a new class of generative models that simultaneously learn the data manifold as well as a tractable probability density on that manifold. Combining aspects of normalizing flows, GANs, autoencoders, and energy-based models, they have the potential to represent datasets with a manifold structure more faithfully and provide handles on dimensionality reduction, denoising, and out-of-distribution detection. We argue why such models should not be trained by maximum likelihood alone and present a new training algorithm that separates manifold and density updates... (a toy change-of-variables sketch follows this list)
  • Gotchas to getting the TinyML book demos to run
    I’ve been having tremendous fun working with the demos for the TinyML book, written by Pete Warden and Daniel Situnayake about the insanely cool notion of empowering ultra-low-power devices with AI for slick edge computing. It’s really neat to be able to deploy machine learning models to a diminutive device and run inferences for practical things like speech recognition and image classification on a microcontroller that’s about the size of my nose... (a short model-conversion sketch follows this list)
  • Language Models are Few-Shot Learners
    Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. ...
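
    A minimal sketch of the "few-shot" prompting idea described above: task examples are placed directly in the prompt and the model simply continues the pattern, with no fine-tuning. GPT-3 itself is not openly downloadable, so this sketch uses GPT-2 through the Hugging Face transformers library purely to illustrate the prompt format; a model this small will usually not complete the task well, which is exactly the scaling point the paper makes.

      # Few-shot prompting sketch (illustrative; GPT-2 stands in for GPT-3).
      from transformers import pipeline

      generator = pipeline("text-generation", model="gpt2")

      # In-context examples followed by the query; the model continues the pattern.
      prompt = (
          "Translate English to French.\n"
          "sea otter => loutre de mer\n"
          "peppermint => menthe poivrée\n"
          "cheese =>"
      )

      out = generator(prompt, max_new_tokens=5, do_sample=False)
      print(out[0]["generated_text"])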
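
    For the TinyML item above, here is a hedged sketch of the central conversion step in that workflow: train a small Keras model, then convert it to a TensorFlow Lite flatbuffer that TensorFlow Lite for Microcontrollers can run on-device. The toy model and file name are illustrative, not taken from the book.

      import numpy as np
      import tensorflow as tf

      # Toy model: learn y = 2x from a handful of points.
      x = np.linspace(-1, 1, 100).astype("float32").reshape(-1, 1)
      y = 2.0 * x
      model = tf.keras.Sequential([
          tf.keras.layers.Dense(8, activation="relu", input_shape=(1,)),
          tf.keras.layers.Dense(1),
      ])
      model.compile(optimizer="adam", loss="mse")
      model.fit(x, y, epochs=10, verbose=0)

      # Convert to a .tflite flatbuffer (quantization options omitted for brevity).
      converter = tf.lite.TFLiteConverter.from_keras_model(model)
      tflite_model = converter.convert()
      with open("model.tflite", "wb") as f:
          f.write(tflite_model)
      # On a microcontroller, this flatbuffer is typically embedded as a C array
      # (e.g. xxd -i model.tflite > model.cc) and run with the TFLite Micro interpreter.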
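
    And for the manifold-learning flows item, a toy illustration of the change-of-variables computation that gives standard normalizing flows their tractable density; M-flows extend this idea to densities defined on a learned lower-dimensional manifold. The affine transform below is an illustrative stand-in for a learned invertible network, not the paper's algorithm.

      import torch

      base = torch.distributions.Normal(0.0, 1.0)   # base density p_z
      a, b = torch.tensor(2.0), torch.tensor(0.5)   # invertible map x = a * z + b

      def log_prob_x(x):
          # Change of variables: log p_x(x) = log p_z(z) - log|dx/dz|, with z = (x - b) / a
          z = (x - b) / a
          return base.log_prob(z) - torch.log(torch.abs(a))

      samples = a * base.sample((5,)) + b           # sample by pushing z through the map
      print(log_prob_x(samples))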



Training



Quick Question For You: Do you want a Data Science job?

After helping hundreds of readers like you get Data Science jobs, we've distilled all the real-world-tested advice into a self-directed course.
The course is broken down into three guides:
  1. Data Science Getting Started Guide. This guide shows you how to figure out the knowledge gaps that MUST be closed in order for you to become a data scientist quickly and effectively (as well as the ones you can ignore)

  2. Data Science Project Portfolio Guide. This guide teaches you how to start, structure, and develop your data science portfolio with the right goals and direction so that you are a hiring manager's dream candidate

  3. Data Science Resume Guide. This guide shows how to make your resume promote your best parts, what to leave out, how to tailor it to each job you want, as well as how to make your cover letter so good it can't be ignored!
Click here to learn more
...
*Sponsored post. If you want to be featured here, or as our main sponsor, contact us!



Jobs

  • Data Scientist - Amazon Demand Forecasting - New York

    The Amazon Demand Forecasting team seeks a Data Scientist with strong analytical and communication skills to join our team. We develop sophisticated algorithms that involve learning from large amounts of data, such as prices, promotions, similar products, and a product's attributes, in order to forecast the demand of over 190 million products world-wide. These forecasts are used to automatically order more than $200 million worth of inventory weekly, establish labor plans for tens of thousands of employees, and predict the company's financial performance. The work is complex and important to Amazon. With better forecasts we drive down supply chain costs, enabling the offer of lower prices and better in-stock selection for our customers...

        Want to post a job here? Email us for details >> team@datascienceweekly.org


Training & Resources

  • Welcome to SVM Tutorial

    Are you interested in Support Vector Machines (SVMs) and want to learn more about them?...You are in the right place. I created this site in order to share tutorials about SVMs... (a minimal scikit-learn example follows this list)
  • Introduction to Machine Learning & AI lectures by DeepMind and UCL [Videos]

    In this lecture [1 of 12], DeepMind Research Scientist and UCL Professor Thore Graepel explains DeepMind's machine learning based approach towards AI. He gives examples of how deep learning and reinforcement learning can be combined to build intelligent systems, including AlphaGo, Capture The Flag, and AlphaStar. This is followed by a short introduction to the different topics and speakers coming up in the subsequent lectures...
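
    As a companion to the SVM tutorial listed above, here is a minimal, generic example of fitting an SVM classifier with scikit-learn (not taken from the linked site): an RBF-kernel SVC trained on the iris dataset, with feature scaling since the kernel depends on distances.

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.25, random_state=0
      )

      # Scale features, then fit an RBF-kernel support vector classifier.
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))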


Books


  • Seven Databases in Seven Weeks:
    A Guide to Modern Databases and the NoSQL Movement

    "A book that tries to cover multiple database is a risky endeavor, a book that also provides hands on on each is even riskier but if implemented well leads to a great package. I loved the specific exercises the authors covered. A must read for all big data architects who don’t shy away from coding..."... For a detailed list of books covering Data Science, Machine Learning, AI and associated programming languages check out our resources page
    .

     

    P.S. Enjoying the newsletter? Please forward it to your friends and colleagues - we'd love to have them onboard :) All the best, Hannah & Sebastian


Easy to unsubscribe at any time. Your e-mail address is safe.