
Semantic Channel and Shannon's Channel Mutually Match for Multi-Label Classification
A group of transition probability functions forms a Shannon's channel, whereas a group of truth functions forms a semantic channel. Label learning lets semantic channels match Shannon's channels, and label selection lets Shannon's channels match semantic channels. The Channel Matching (CM) algorithm is provided for multi-label classification. This algorithm adheres to the maximum semantic information criterion, which is compatible with the maximum likelihood criterion and the regularized least squares criterion. If the sample is very large, we can directly convert Shannon's channels into semantic channels by the third kind of Bayes' theorem; otherwise, we can train truth functions with parameters on sampling distributions. A label may be a Boolean function of some atomic labels. To simplify learning, we may obtain only the truth functions of the atomic labels. For a given label, instances are divided into three kinds (positive, negative, and unclear) instead of two kinds as in popular studies, so that the problem with binary relevance is avoided. For each instance, the classifier selects a compound label with the most semantic information, or the richest connotation. As a predictive model, the semantic channel does not change with the prior probability distribution (source) of instances; it still works when the source is changed. The classifier, however, does change with the source, and hence can overcome the class-imbalance problem. It is shown that the growth of the old population will change the classifier for the label "Old" and has been impelling the semantic evolution of "Old". The CM iteration algorithm for classifying unseen instances is also introduced.
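The channel-matching idea above can be sketched numerically. The code below is a minimal illustration, not the paper's implementation: it assumes the conversion from a Shannon's channel P(y|x) to truth functions normalizes the likelihood ratio P(y_j|x)/P(y_j) so its maximum over x equals 1, and that the classifier picks, for each x, the label maximizing the semantic information log[T(θ_j|x)/T(θ_j)], where T(θ_j) is the label's logical probability. The function names, array shapes, and toy numbers are all assumptions for illustration.

```python
import numpy as np

def shannon_to_semantic(P_y_given_x, P_x):
    """Convert a Shannon's channel into a semantic channel (sketch).

    P_y_given_x: (n_x, n_y) transition probability matrix P(y_j|x_i).
    P_x: (n_x,) prior (source) distribution of instances.
    Returns truth functions T(theta_j|x), normalized so max over x is 1.
    """
    P_y = P_x @ P_y_given_x            # marginal P(y_j)
    ratio = P_y_given_x / P_y          # P(y_j|x) / P(y_j)
    return ratio / ratio.max(axis=0)   # scale each column to max 1

def classify(T, P_x):
    """Select, for each x, the label with the most semantic information
    log[T(theta_j|x) / T(theta_j)], with T(theta_j) = sum_x P(x) T(theta_j|x)."""
    T_theta = P_x @ T                  # logical probabilities of labels
    info = np.log(T / T_theta)         # semantic information matrix
    return info.argmax(axis=1)

# Toy example: 3 instance values, 2 labels (numbers are made up).
P_x = np.array([0.5, 0.3, 0.2])
P_y_given_x = np.array([[0.9, 0.1],
                        [0.5, 0.5],
                        [0.2, 0.8]])
T = shannon_to_semantic(P_y_given_x, P_x)
labels = classify(T, P_x)
```

Note that in this sketch the truth functions `T` depend only on the ratios P(y|x)/P(y), while the decision in `classify` uses `P_x` again, which mirrors the abstract's point that the semantic channel is stable under a changed source while the classifier adapts to it.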