Future of Quality Engineering with Machine Learning

Hemanshu Chauhan
3 min read · Sep 1, 2019


A bit of history

The Colossus, the first programmable electronic computer, was developed by Tommy Flowers and first demonstrated in December 1943. It was created to help British codebreakers read encrypted German messages.

The term “Machine Learning” was coined in 1959 by AI pioneer Arthur Samuel, an engineer at IBM. But it all started well before the computer, since many mathematical foundations of modern ML come from statistics. The work of Thomas Bayes and Pierre-Simon Laplace (Bayes’ theorem, 1812), Adrien-Marie Legendre (the least squares method, 1805) and Andrey Markov (Markov chains, 1913) is thus essential to ML.

Marvin Minsky and Dean Edmonds built the first artificial neural network in 1951. The project proposed a computational model inspired by an animal’s central nervous system (particularly the brain) that is capable of ML as well as pattern recognition.

In 1997, Deep Blue, an IBM computer, beat world chess champion Garry Kasparov. The machine worked by searching 6 to 20 moves ahead in each position, with an evaluation function tuned on thousands of past chess games to find the path to checkmate.

Choosing your Data Science adventure

Machine Learning Use Cases in Quality Engineering

  1. Automatic Test Generation — Spidering AI (writing test cases automatically)
  2. Selective Test Execution — running more of the automated tests that matter
  3. Intelligent visual testing, e.g. Applitools
  4. AI-powered test reports, e.g. Reportportal.io
  5. Defect Prediction (see the sketch below)
  6. Smart Test Optimization & Prioritization

And many more…
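
To make use case 5 concrete, here is a minimal defect-prediction sketch, assuming scikit-learn: a classifier is trained on historical change metrics and then used to rank modules by predicted defect risk. The feature names and data below are hypothetical placeholders, not from any real project.

```python
# Minimal defect-prediction sketch: train a classifier on historical
# change metrics to flag modules that are likely to contain defects.
# Feature names and data are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-module metrics: [lines_changed, num_authors, past_defects]
X = rng.integers(0, 100, size=(500, 3)).astype(float)
# Hypothetical label: 1 = module later had a defect, 0 = it did not
y = (X[:, 0] + 5 * X[:, 2] + rng.normal(0, 20, 500) > 150).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
# Rank unseen modules by predicted defect risk and test the riskiest first
risk = model.predict_proba(X_test)[:, 1]
print("highest-risk modules:", np.argsort(risk)[::-1][:5])
```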

Some of the main approaches

  1. Supervised learning
  2. Unsupervised learning
  3. Reinforcement learning
  4. Deep learning

How to do it

Through:

  • Regression
  • Classification
  • Clustering

Regression: It predicts a continuous-valued output. Regression analysis is a statistical model used to predict numeric values rather than labels. It can also identify distribution trends in the available or historical data. Predicting a person’s income from their age and education is an example of a regression task.
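
A minimal sketch of that income example, assuming scikit-learn and purely synthetic data:

```python
# Regression sketch: predict a continuous value (income) from age and
# years of education. The data is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
age = rng.uniform(20, 65, 200)
education_years = rng.uniform(8, 20, 200)
income = 1000 * age + 2500 * education_years + rng.normal(0, 5000, 200)

X = np.column_stack([age, education_years])
model = LinearRegression().fit(X, income)

# Predict income for a 35-year-old with 16 years of education
print(model.predict([[35, 16]]))
```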

Classification: It predicts a discrete set of values. In classification, the data is categorized under different labels according to some parameters, and those labels are then predicted for new data. Classifying emails as either spam or not spam is an example of a classification problem.
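
A minimal sketch of the spam example, again assuming scikit-learn; the tiny training set is invented for illustration:

```python
# Classification sketch: label each email as spam or not spam.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now", "cheap loans click here",
    "meeting agenda for monday", "please review the test report",
]
labels = ["spam", "spam", "not spam", "not spam"]

# Turn raw text into word counts, then fit a naive Bayes classifier
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(emails, labels)

print(clf.predict(["free prize inside", "monday review meeting"]))
```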

Clustering: Clustering is the task of partitioning a dataset into groups, called clusters. The goal is to split the data so that points within a single cluster are very similar and points in different clusters are different. It determines groupings among unlabelled data.
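
A minimal clustering sketch on unlabelled synthetic data, using k-means as one common choice (scikit-learn assumed):

```python
# Clustering sketch: group unlabelled points so that points in the same
# cluster are similar. k-means is used here as one common choice.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Unlabelled data drawn from two well-separated blobs
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_[:10])
print(kmeans.cluster_centers_)
```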

Regression

A regression problem is one where you predict a continuous value, such as the price of a house given features like its size and number of rooms.

The usual trade-off when choosing a regression algorithm is between models that are accurate but slow and models that are fast.
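
One way to see that trade-off, sketched with synthetic house data and scikit-learn: a plain linear model trains almost instantly, while a boosted ensemble is slower to fit but often more accurate. The price formula below is made up for illustration.

```python
# Speed/accuracy trade-off for regression on the house-price example.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
size = rng.uniform(50, 300, 400)    # square metres
rooms = rng.integers(1, 8, 400)     # number of rooms
X = np.column_stack([size, rooms])
price = 3000 * size + 10000 * rooms + rng.normal(0, 20000, 400)

for name, model in [("linear (fast)", LinearRegression()),
                    ("boosting (slower)", GradientBoostingRegressor(random_state=0))]:
    score = cross_val_score(model, X, price, cv=5, scoring="r2").mean()
    print(f"{name}: R^2 = {score:.3f}")
```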

Classification

A classification problem is one where you predict the class of a given input.

Again, the trade-off is between classifiers that are slow but accurate and classifiers that are fast.
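
The same trade-off sketched for classification, using a toy scikit-learn dataset: logistic regression is quick to train, while a kernel SVM is usually slower but can capture non-linear boundaries.

```python
# Speed/accuracy trade-off for classification on a toy non-linear dataset.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_moons(n_samples=500, noise=0.3, random_state=0)

for name, model in [("logistic regression (fast)", LogisticRegression()),
                    ("RBF-kernel SVM (slower)", SVC(kernel="rbf"))]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: accuracy = {score:.3f}")
```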

Clustering

A clustering problem is one where you divide the data into k groups according to their features, such that objects in the same group have some degree of similarity.

Hierarchical Clustering

  • Agglomerative: This is a “bottom-up” approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
  • Divisive: This is a “top-down” approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.

Non-hierarchical Clustering

Non-hierarchical (partitional) methods assign each point to one of a fixed number of clusters chosen in advance; k-means is the most common example.
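
A short sketch contrasting the two families, assuming scikit-learn: agglomerative (bottom-up hierarchical) clustering versus k-means, both asked for two clusters of the same toy data.

```python
# Hierarchical (agglomerative) vs non-hierarchical (k-means) clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

rng = np.random.default_rng(4)
data = np.vstack([
    rng.normal(loc=0.0, scale=0.6, size=(40, 2)),
    rng.normal(loc=4.0, scale=0.6, size=(40, 2)),
])

hier = AgglomerativeClustering(n_clusters=2, linkage="ward").fit(data)
flat = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

print("agglomerative labels:", hier.labels_[:10])
print("k-means labels:      ", flat.labels_[:10])
```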

Other Useful ML Algorithms

Some of the AI/ML-based Testing tools

  • Applitools
  • Functionize
  • MABL
  • Retest
  • ReportPortal.io
  • Sealights
  • Testim
  • Test.AI
