Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Published in Sindh University Research Journal (SURJ), 45(3): 7-10, 2013
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Proceedings of the 6th International Conference on Computing, Electronic and Electrical Engineering (ICE Cube). IEEE, 2016
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 7, Issue 11, 2016
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Sukkur IBA Journal of Computing and Mathematical Sciences, 1(2): 48-56, 2017
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Sukkur IBA Journal of Computing and Mathematical Sciences, 1(1): 114-121, 2017
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Proceedings of the Australasian Joint Conference on Artificial Intelligence. Springer, Cham, 2019
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Proceedings of the Australasian Joint Conference on Artificial Intelligence. Springer, Cham, 2020
This paper is about the number 1. The number 2 is left for future work.
Download here
Published in Proceedings of the Australasian Joint Conference on Artificial Intelligence. Springer, Cham, 2022
This paper is about the number 1. The number 2 is left for future work.
Download here
Published:
In this tutorial, I introduce machine learning fundamentals and classification algorithms in detail. I also demonstrate how to apply the Naive Bayes, Decision Tree, and Random Forest algorithms to a simple dataset using the [WEKA](https://www.cs.waikato.ac.nz/ml/weka/) (Waikato Environment for Knowledge Analysis) tool. A rough scikit-learn companion sketch is shown below.
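As a rough companion to the WEKA walkthrough (my own illustration, not part of the tutorial itself, which uses the WEKA GUI), here is a minimal scikit-learn sketch that fits the same three classifiers on the Iris dataset and reports test accuracy.

```python
# Minimal sketch (not part of the tutorial itself): the same three classifiers
# covered in the WEKA walkthrough, shown with scikit-learn on the Iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Load a small benchmark dataset and hold out 30% for testing.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

classifiers = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}

# Train each classifier and report its accuracy on the held-out test set.
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```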
Published:
Abstract: Neural networks have been successfully used as classification models yielding state-of-the-art results when trained on a large number of labeled samples. These models, however, are more difficult to train successfully for semi-supervised problems where small amounts of labeled instances are available along with a large number of unlabeled instances. This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network to obtain a suitable embedding. The learned representations are discriminative in Euclidean space, and hence can be used for labeling unlabeled instances using a nearest-neighbor classifier. Confident predictions of unlabeled instances are used as true labels for retraining the Siamese network on the expanded training set. This process is applied iteratively. We perform an empirical study of this iterative self-training algorithm. For improving unlabeled predictions, local learning with global consistency [22] is also evaluated.
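Below is a schematic sketch of the iterative self-training loop described in the abstract. The Siamese training routine, the embedding function, the confidence threshold, and the use of k = 5 neighbours to obtain a rough confidence score are illustrative assumptions of mine, not the paper's implementation; the local learning with global consistency step [22] is omitted.

```python
# Schematic sketch of the iterative self-training loop from the abstract.
# train_siamese() and embed() stand in for the Siamese network and its learned
# embedding; the confidence threshold and k = 5 neighbours (used to obtain a
# rough confidence score) are illustrative choices, not the paper's settings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def self_train(X_lab, y_lab, X_unlab, train_siamese, embed,
               n_iterations=5, confidence=0.95):
    for _ in range(n_iterations):
        # Learn a similarity-based embedding from the current labelled set.
        train_siamese(X_lab, y_lab)
        knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
        knn.fit(embed(X_lab), y_lab)
        if len(X_unlab) == 0:
            break
        # Predict labels for unlabelled instances in the embedding space.
        proba = knn.predict_proba(embed(X_unlab))
        pred = proba.argmax(axis=1)
        confident = proba.max(axis=1) >= confidence
        if not confident.any():
            break
        # Treat confident predictions as true labels and expand the training set.
        X_lab = np.concatenate([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, knn.classes_[pred[confident]]])
        X_unlab = X_unlab[~confident]
    return X_lab, y_lab
```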
Published:
Abstract: Deep neural networks produce state-of-the-art results when trained on a large number of labeled examples but tend to overfit when small amounts of labeled examples are used for training. Creating a large number of labeled examples requires considerable resources, time, and effort. If labeling new data is not feasible, so-called semi-supervised learning can achieve better generalisation than purely supervised learning by employing unlabeled instances as well as labeled ones. The work presented in this paper is motivated by the observation that transfer learning provides the opportunity to potentially further improve performance by exploiting models pretrained on a similar domain. More specifically, we explore the use of transfer learning when performing semi-supervised learning using self-learning. The main contribution is an empirical evaluation of transfer learning using different combinations of similarity metric learning methods and label propagation algorithms in semi-supervised learning. We find that transfer learning always substantially improves the model’s accuracy when few labeled examples are available, regardless of the type of loss used for training the neural network. This finding is obtained by performing extensive experiments on the SVHN, CIFAR10, and Plant Village image classification datasets and applying pretrained weights from Imagenet for transfer learning.
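The sketch below shows the transfer-learning starting point described above: an ImageNet-pretrained backbone whose final layer is replaced to match the target dataset, after which the model is fine-tuned and used inside the self-learning loop. The choice of ResNet-18 and the hyperparameters are my own illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the transfer-learning setup: start from ImageNet-pretrained
# weights and adapt the classifier head to the target dataset (e.g. SVHN,
# CIFAR-10, or PlantVillage). ResNet-18 and the hyperparameters below are
# illustrative assumptions, not the paper's exact configuration.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

def build_pretrained_classifier(num_classes: int) -> nn.Module:
    # Load an ImageNet-pretrained backbone and replace the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_pretrained_classifier(num_classes=10)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()
# The fine-tuned model is then used in the self-learning loop to pseudo-label
# unlabelled images and is retrained on the expanded labelled set.
```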
Published:
Abstract: Self-training is a simple semi-supervised learning approach: Unlabelled examples that attract high-confidence predictions are labelled with their predictions and added to the training set, with this process being repeated multiple times. Recently, self-supervision—learning without manual supervision by solving an automatically-generated pretext task—has gained prominence in deep learning. This paper investigates three different ways of incorporating self-supervision into self-training to improve accuracy in image classification: self-supervision as pretraining only, self-supervision performed exclusively in the first iteration of self-training, and self-supervision added to every iteration of self-training. Empirical results on the SVHN, CIFAR-10, and PlantVillage datasets, using both training from scratch, and Imagenet-pretrained weights, show that applying self-supervision only in the first iteration of self-training can greatly improve accuracy, for a modest increase in computation time.
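The sketch below outlines the three placements of the self-supervised phase compared in the paper: as pretraining only, in the first self-training iteration only, or in every iteration. The helper functions (train_pretext, train_supervised, pseudo_label) are hypothetical placeholders; only the control flow mirrors the variants described in the abstract.

```python
# Schematic sketch of where the self-supervised (pretext-task) phase can sit
# relative to self-training. train_pretext(), train_supervised(), and
# pseudo_label() are hypothetical placeholders, not the paper's code.
from enum import Enum

class SelfSupMode(Enum):
    PRETRAIN_ONLY = 1     # self-supervision as pretraining only
    FIRST_ITERATION = 2   # self-supervision only in the first self-training iteration
    EVERY_ITERATION = 3   # self-supervision added to every self-training iteration

def self_train(model, labelled, unlabelled, mode, n_iterations,
               train_pretext, train_supervised, pseudo_label):
    if mode is SelfSupMode.PRETRAIN_ONLY:
        # Solve the pretext task once, before any self-training.
        train_pretext(model, unlabelled)
    for i in range(n_iterations):
        if (mode is SelfSupMode.EVERY_ITERATION
                or (mode is SelfSupMode.FIRST_ITERATION and i == 0)):
            # Solve the pretext task inside this self-training iteration.
            train_pretext(model, unlabelled)
        # Supervised step on the current labelled set, then pseudo-label
        # confident unlabelled examples to expand it.
        train_supervised(model, labelled)
        labelled, unlabelled = pseudo_label(model, labelled, unlabelled)
    return model
```

Per the abstract, the first-iteration-only variant yields most of the accuracy gain for only a modest increase in computation time.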
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022
This is a draft course description for CE-08: Data Mining taught in Spring 2022.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022
This is a draft course description for ITC-05: System and Network Administration taught in Spring 2022.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022
This is a draft course description for CE-02: Computer Graphics taught in Fall 2022.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2022
This is a draft course description for CE-09: Natural Language Processing taught in Fall 2022.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023
This is a draft course description for CSC-06: Artificial Intelligence taught in Spring 2023.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023
This is a draft course description for CE III: Machine Learning taught in Spring 2023.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023
This is a draft course description for CSC-04: Design & Analysis of Algorithms taught in Fall 2023.
Undergraduate course, The Shaikh Ayaz University, Department of CS, 2023
This is a draft course description for CE III: Deep Learning taught in Fall 2023.