You may have heard a friend talking about Object Detection, perhaps uttering words like YOLO and Faster-RCNN. If you’ve wondered what YOLO means and why you should care about it, look no further.
This blog provides an in-depth study of YOLOv3 (You Only Look Once, version 3), one of the most popular deep learning models for object detection.
I’ll explain the architecture of the YOLOv3 model, layer by layer, and we’ll look at some object detection results I got while running the inference program on some…
TensorFlow is a deep learning library that makes building and deploying deep learning applications remarkably easy. If you’ve wondered what this library is all about, wait no more: keep reading to find out what makes TensorFlow unique.
This blog gives an overview of the TensorFlow library: a brief introduction to the topic with some important keywords, its installation, and demo code.
In this blog we’ll try to understand one of the most popular tools used to containerize and deploy applications over the internet: Docker. It makes deploying applications extremely simple.
We will look at what makes Docker so special and learn how you can build, deploy, and fetch applications using Docker and Docker Hub in just a few steps.
Support Vector Machines (SVMs) are a set of supervised learning methods which learn from the dataset and can be used for both regression and classification. An SVM is a kind of large-margin classifier: it is a vector space based machine learning method where the goal is to find a decision boundary between two classes that is maximally far from any point in the training data.
The term support vectors refers to the training points that lie closest to the decision boundary; these points alone determine the margin. The SVM then finds the hyperplane (a line, in two dimensions) that best separates the two classes.
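To make the maximum-margin idea concrete, here is a minimal sketch using scikit-learn’s `SVC` (the library is my assumption; the post doesn’t name one) on a toy two-class dataset:

```python
import numpy as np
from sklearn.svm import SVC

# Toy dataset: two well-separated clusters, one per class
X = np.array([[1, 2], [2, 3], [3, 3], [6, 5], [7, 8], [8, 8]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear-kernel SVM finds the maximum-margin separating hyperplane
clf = SVC(kernel="linear")
clf.fit(X, y)

# The support vectors are the training points closest to the boundary
print(clf.support_vectors_)
print(clf.predict([[2, 2], [7, 7]]))
```

Only the support vectors influence the fitted boundary; moving any other training point (without crossing the margin) leaves the model unchanged.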
In this blog we’ll try to dig deeper into Random Forest Classification. Here we will learn about ensemble learning and will try to implement it using Python.
You can find the code over here.
It is an ensemble, tree-based learning algorithm. A Random Forest classifier is a set of decision trees, each built from a randomly selected subset of the training set. It aggregates the votes from the different decision trees to decide the final class of the test object.
Ensemble algorithms are those which combine more than one algorithm, of the same or different kinds, to classify objects. …
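The vote-aggregation scheme described above can be sketched with scikit-learn’s `RandomForestClassifier` (my choice of library; the post links its own code separately). Synthetic data stands in for a real dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic two-class dataset as a stand-in for a real training set
X, y = make_classification(n_samples=200, n_features=8, random_state=42)

# Each tree trains on a bootstrap sample of the rows and considers a
# random subset of features at each split; the forest takes a majority
# vote across trees to decide the final class.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X, y)

print(rf.predict(X[:5]))
print(rf.score(X, y))  # accuracy on the training data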
Logistic Regression is a Supervised learning algorithm widely used for classification. It is used to predict a binary outcome (1/ 0, Yes/ No, True/ False) given a set of independent variables. To represent binary/ categorical outcome, we use dummy variables.
Logistic regression uses an equation as its representation, very much like linear regression. It is not much different from linear regression, except that the output of the linear equation is passed through a sigmoid function.
Simple Linear and Multiple Linear Regression Equation:
y = b0 + b1x1 + b2x2 + ... + e
Sigmoid function :
p = 1 / (1 + e^(-y))
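The two equations above compose directly: the linear combination y is squashed through the sigmoid to give a probability. A small NumPy sketch (the coefficient values are illustrative, not from any fitted model):

```python
import numpy as np

def sigmoid(z):
    """Map any real value to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative coefficients: y = b0 + b1 * x1
b0, b1 = -4.0, 1.0
x1 = np.array([0.0, 4.0, 8.0])

# Probabilities rise from near 0, through 0.5, towards 1 as y grows
p = sigmoid(b0 + b1 * x1)
print(p)
```

Thresholding p (typically at 0.5) turns the probability into the binary 1/0 prediction.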
A Decision Tree is a simple representation for classifying examples. It is a supervised machine learning method in which the data is continuously split according to a certain parameter.
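The repeated splitting can be seen directly by printing a fitted tree. A minimal sketch with scikit-learn (the library and the Iris dataset are my assumptions for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# A shallow tree keeps the printed structure readable
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# Each line is one split on a feature threshold; leaves carry the class
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Each internal node tests one feature against a threshold, so a prediction is just a walk from the root to a leaf.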
In this blog, we’ll talk about one of the most widely used machine learning algorithms for classification: the K-Nearest Neighbors (K-NN) algorithm. K-NN is simple, easy to understand, versatile, and finds applications in a variety of fields.
In this blog we’ll try to understand what K-NN is, how it works, some common distance metrics used in K-NN, and its advantages and disadvantages, along with some of its modern applications.
K-NN is a non-parametric and lazy learning algorithm. Non-parametric means there is no assumption for underlying data distribution…
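The lazy-learning behaviour is easy to demonstrate: fitting just stores the data, and all work happens at query time. A sketch with scikit-learn’s `KNeighborsClassifier` (my assumed library) on a tiny one-dimensional dataset:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two clusters of one-dimensional points, one per class
X = np.array([[1.0], [1.2], [1.4], [5.0], [5.2], [5.4]])
y = np.array([0, 0, 0, 1, 1, 1])

# With k=3, a query point is assigned the majority class among its
# three nearest neighbours under Euclidean distance (the default metric)
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)  # "fitting" only stores the training data

print(knn.predict([[1.1], [5.1]]))
```

Being non-parametric, K-NN makes no assumption about the data distribution; the trade-off is that every prediction must scan (or index) the stored training set.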
In this blog we’ll try to understand one of the most important algorithms in machine learning: the Random Forest algorithm. We will look at what makes Random Forest so special and implement it on a real-life dataset.
The Code along with the dataset can be found here.
An Ensemble method is a technique that combines the predictions from multiple machine learning algorithms to make more accurate predictions than any individual model. A model made up of many models is called an ensemble model.
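Random Forest combines many trees of the same kind, but ensembles can also mix different algorithms. One way to sketch that, using scikit-learn’s `VotingClassifier` (my choice for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=1)

# Three different model families vote; the majority class wins
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=1)),
    ],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

Because the member models tend to make different mistakes, the majority vote is often more accurate than any single member.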
We’re going to be implementing Linear Regression on the ‘Boston Housing’ dataset.
The Boston dataset contains information about different houses in Boston. There are 506 samples and 13 feature variables in this dataset. Our aim is to predict house prices using the given features.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv("boston.csv")
To get basic details about our Boston Housing dataset (null or missing values, data types, etc.), we can use .info() as shown below:
RangeIndex: 506 entries, 0 to 505
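From here, fitting the regression follows the usual pattern. Since boston.csv isn’t bundled with this excerpt, the sketch below builds a small synthetic stand-in with the same workflow (the column names RM, LSTAT, MEDV and the coefficient values are illustrative, not the real dataset):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for boston.csv: 506 rows, a few features, target MEDV
rng = np.random.default_rng(0)
features = rng.normal(size=(506, 3))
target = 3.0 * features[:, 0] - 2.0 * features[:, 1] \
         + rng.normal(scale=0.1, size=506)

data = pd.DataFrame(features, columns=["RM", "LSTAT", "PTRATIO"])
data["MEDV"] = target

data.info()  # dtypes, non-null counts, memory usage

# Fit ordinary least squares on the features to predict MEDV
model = LinearRegression()
model.fit(data[["RM", "LSTAT", "PTRATIO"]], data["MEDV"])

print(model.coef_)  # should roughly recover the generating weights
print(model.score(data[["RM", "LSTAT", "PTRATIO"]], data["MEDV"]))
```

With the real boston.csv, only the `read_csv` call and column names would change; the fit/score steps stay the same.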