Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
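For reference, this corresponds to a single key in Jekyll's site configuration; a minimal sketch, assuming the standard file layout:

```yaml
# _config.yml — at the site root
future: false   # skip rendering posts whose date is still in the future
```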

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

CHATEM: Chat Emotion Mapper

Published in Sindh University Research Journal (SURJ), 45(3): 7-10, 2013

This paper is about the number 1. The number 2 is left for future work.

Download here

Talks

Applied Machine Learning using WEKA

Published:

In this tutorial, I introduce machine learning fundamentals and classification algorithms in detail. I also demonstrate how to apply the Naive Bayes, Decision Tree, and Random Forest algorithms to a simple dataset using the [WEKA](https://www.cs.waikato.ac.nz/ml/weka/) (Waikato Environment for Knowledge Analysis) tool.
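WEKA itself is driven through its GUI or Java API; as a rough Python analogue of the workflow covered in the tutorial — a plainly swapped-in scikit-learn version, not WEKA code, with the classic Iris data assumed as the "simple dataset":

```python
# scikit-learn stand-ins for the three classifiers demonstrated in the talk
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for clf in (GaussianNB(), DecisionTreeClassifier(), RandomForestClassifier()):
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{type(clf).__name__}: mean accuracy {scores.mean():.3f}")
```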

Conference Proceeding talk: Semi-supervised learning using Siamese networks

Published:

Abstract: Neural networks have been successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are more difficult to train successfully for semi-supervised problems where small amounts of labeled instances are available along with a large number of unlabeled instances. This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network to obtain a suitable embedding. The learned representations are discriminative in Euclidean space, and hence can be used for labeling unlabeled instances using a nearest-neighbor classifier. Confident predictions of unlabeled instances are used as true labels for retraining the Siamese network on the expanded training set. This process is applied iteratively. We perform an empirical study of this iterative self-training algorithm. For improving unlabeled predictions, local learning with global consistency [22] is also evaluated.
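A minimal sketch of the iterative self-training loop the abstract describes, assuming the Siamese embeddings are already computed as NumPy arrays. The function name, the 0.9 confidence threshold, and the use of scikit-learn's k-NN are illustrative assumptions, not the paper's code, and the Siamese-network retraining step between rounds is only noted in a comment:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_train(emb_labeled, y_labeled, emb_unlabeled, threshold=0.9, rounds=5):
    """Iteratively pseudo-label unlabeled embeddings with a nearest-neighbour
    classifier, keeping only confident predictions each round."""
    X, y = emb_labeled.copy(), y_labeled.copy()
    pool = emb_unlabeled.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
        proba = knn.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold        # confident predictions only
        if not keep.any():
            break
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, knn.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]                           # shrink the unlabeled pool
        # In the paper, the Siamese network is retrained on the expanded
        # training set here and the pool re-embedded; omitted in this sketch.
    return X, y
```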

Conference Proceeding talk: Transfer of Pretrained Model Weights Substantially Improves Semi-supervised Image Classification

Published:

Abstract: Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when small amounts of labeled examples are used for training. Creating a large number of labeled examples requires considerable resources, time, and effort. If labeling new data is not feasible, so-called semi-supervised learning can achieve better generalisation than purely supervised learning by employing unlabeled instances as well as labeled ones. The work presented in this paper is motivated by the observation that transfer learning provides the opportunity to potentially further improve performance by exploiting models pretrained on a similar domain. More specifically, we explore the use of transfer learning when performing semi-supervised learning using self-learning. The main contribution is an empirical evaluation of transfer learning using different combinations of similarity metric learning methods and label propagation algorithms in semi-supervised learning. We find that transfer learning always substantially improves the model’s accuracy when few labeled examples are available, regardless of the type of loss used for training the neural network. This finding is obtained by performing extensive experiments on the SVHN, CIFAR10, and Plant Village image classification datasets and applying pretrained weights from ImageNet for transfer learning.
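A minimal sketch of the transfer-learning starting point the abstract describes, assuming a recent torchvision and a 10-class target task such as CIFAR10; the semi-supervised self-learning loop itself is out of scope here:

```python
import torch.nn as nn
from torchvision import models

# Initialise from ImageNet-pretrained weights rather than random weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)   # new head, e.g. for CIFAR10
# The model would then be fine-tuned inside the self-learning loop on the few
# labeled examples plus confidently pseudo-labeled unlabeled ones.
```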

Conference Proceeding talk: Better Self-training for Image Classification Through Self-supervision

Published:

Abstract: Self-training is a simple semi-supervised learning approach: Unlabelled examples that attract high-confidence predictions are labelled with their predictions and added to the training set, with this process being repeated multiple times. Recently, self-supervision—learning without manual supervision by solving an automatically-generated pretext task—has gained prominence in deep learning. This paper investigates three different ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch and ImageNet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time.
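A control-flow sketch of the variant the abstract finds most effective: self-supervision (e.g. a rotation-prediction pretext task) run only in the first self-training iteration. All three helper functions are hypothetical placeholders standing in for the paper's components, not its actual code:

```python
def pretext_train(model, unlabeled):
    """Train on an automatically generated pretext task (no manual labels).
    Hypothetical placeholder."""
    ...

def supervised_train(model, labeled):
    """Ordinary supervised training on the current labeled set.
    Hypothetical placeholder."""
    ...

def pseudo_label(model, labeled, unlabeled):
    """Move high-confidence predictions from `unlabeled` into `labeled`.
    Hypothetical placeholder."""
    ...
    return labeled, unlabeled

def self_train_with_ssl(model, labeled, unlabeled, iterations=3):
    for it in range(iterations):
        if it == 0:                     # self-supervision in iteration 1 only
            pretext_train(model, unlabeled)
        supervised_train(model, labeled)
        labeled, unlabeled = pseudo_label(model, labeled, unlabeled)
    return model
```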

Teaching

CE-08: Data Mining (Spring 2022)

Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022

This is a draft course description for CE-08: Data Mining taught in Spring 2022.

CE-02: Computer Graphics (Fall 2022)

Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022

This is a draft course description for CE-02: Computer Graphics taught in Fall 2022.

CE III: Machine Learning (Spring 2023)

Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023

This is a draft course description for CE III: Machine Learning taught in Spring 2023.

CE III: Deep Learning (Fall 2023)

Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023

This is a draft course description for CE III: Deep Learning taught in Fall 2023.