Introduction to Big Data

Data Science is similar to physics: it attempts to create theories of reality based on a formalism that another science provides. For physics that formalism was mathematics; for data science it is computer science. Data has grown rapidly in recent years: laid out in metres, it would reach the distance to Jupiter. The galaxy is on the order of magnitude of 400 yottametres, where a yottametre is $10^{24}$ metres. So quite a lot. We don't know whether data will keep growing this fast, but we certainly need to be prepared for that case. ...

10 min · Xuanqiang 'Angelo' Huang

Introduction to Natural Language Processing

The landscape of NLP was very different in the early days of the field. "But it must be recognized that the notion 'probability of a sentence' is an entirely useless one, under any known interpretation of this term" (Noam Chomsky, 1968, p. 53). Probability was not well regarded (Chomsky has indeed said many wrong things), and linguists in turn were considered useless. Nowadays, deep learning and computational papers are ubiquitous in the major conferences in linguistics, e.g. ACL. ...

2 min · Xuanqiang 'Angelo' Huang

Kalman Filters

Here is a historical treatment of the topic: https://jwmi.github.io/ASM/6-KalmanFilter.pdf. Kalman Filters are defined as follows: we start with a variable $X_{0} \sim \mathcal{N}(\mu, \Sigma)$, then we have a motion model and a sensor model: $$ \begin{cases} X_{t + 1} = FX_{t} + \varepsilon_{t} & F \in \mathbb{R}^{d\times d}, \varepsilon_{t} \sim \mathcal{N}(0, \Sigma_{x})\\ Y_{t} = HX_{t} + \eta_{t} & H \in \mathbb{R}^{m \times d}, \eta_{t} \sim \mathcal{N}(0, \Sigma_{y}) \end{cases} $$ Inference then amounts to manipulating Gaussians. One can interpret $Y$ as the observations and $X$ as the underlying beliefs about a certain state. We see that Kalman Filters satisfy the Markov property, see Markov Chains. These independence properties allow an easy characterization of the joint distribution for Kalman Filters: ...
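As a concrete illustration of the two equations above, here is a minimal predict/update step in NumPy. It is a sketch under the excerpt's notation ($F$, $H$, $\Sigma_x$, $\Sigma_y$); the function name and shapes are my own assumptions, not code from the note.

```python
import numpy as np

def kalman_step(mu, Sigma, y, F, H, Sigma_x, Sigma_y):
    """One Kalman filter step: predict with the motion model,
    then condition the Gaussian on the new observation y."""
    # Predict: X_{t+1} = F X_t + eps,  eps ~ N(0, Sigma_x)
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Sigma_x
    # Update: condition on Y_t = H X_t + eta,  eta ~ N(0, Sigma_y)
    S = H @ Sigma_pred @ H.T + Sigma_y          # innovation covariance
    K = Sigma_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    mu_new = mu_pred + K @ (y - H @ mu_pred)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_pred
    return mu_new, Sigma_new
```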

3 min · Xuanqiang 'Angelo' Huang

Kernel Methods

As we will briefly see, kernels play an important role in many machine learning applications. In this note we will get to know what kernels are and why they are useful. Intuitively, they measure the similarity between two input points: if the points are close, the kernel value should be large; otherwise it should be small. We briefly state the requirements of a kernel, then argue with a simple example why kernels are useful. ...
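To make the similarity intuition concrete, here is a small sketch using the standard RBF (Gaussian) kernel; the bandwidth `gamma` and the example points are illustrative assumptions, not taken from the note.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """RBF kernel: close points give values near 1, distant points near 0."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

x = np.array([0.0, 0.0])
print(rbf_kernel(x, np.array([0.1, 0.1])))  # ~0.98, very similar
print(rbf_kernel(x, np.array([3.0, 3.0])))  # ~1.5e-8, very dissimilar
```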

9 min · Xuanqiang 'Angelo' Huang

Language Models

In order to understand language models we need to understand structured prediction. If you are familiar with Sentiment Analysis, where a given input text must be classified in a binary manner, the difference here is that the output space usually scales exponentially. The output has some structure: for example, it could be a tree, a set of words, etc. This usually requires an intersection of statistics and computer science. ...

2 min · Xuanqiang 'Angelo' Huang

Linear Regression methods

We will present some methods related to regression for data analysis. Some of the work here is from (Hastie et al. 2009). This note does not treat the Bayesian case; see Bayesian Linear Regression for that. Problem setting $$ Y = \beta_{0} + \sum_{j = 1}^{d} X_{j}\beta_{j} $$ We usually don't know the distribution of $P(X)$ or $P(Y \mid X)$, so we need to assume something about these distributions. ...
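A minimal sketch of fitting the linear model above by ordinary least squares; the synthetic data and the true coefficients are made up purely for illustration.

```python
import numpy as np

# Illustrative synthetic data: y = 1 + 2*x1 - 3*x2 + noise
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 1 + X @ np.array([2.0, -3.0]) + 0.1 * rng.normal(size=100)

# Prepend a column of ones so beta_0 acts as the intercept
Xb = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(beta)  # approximately [1, 2, -3]
```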

9 min · Xuanqiang 'Angelo' Huang

Log Linear Models

Log Linear Models can be considered the most basic models used in natural language processing. The main idea is to model the correlations in our data, i.e. how the posterior $p(y \mid x)$ varies, where $x$ is the feature description of a single data point and $y$ is the label of interest. This is a form of generalization because contextualized events $(x, y)$ with similar descriptions tend to have similar probabilities. These kinds of models are so common that they have been rediscovered in many fields (and thus go by different names): some of the most famous are Gibbs distributions, undirected graphical models, Markov Random Fields or Conditional Random Fields, exponential models, and (regularized) maximum entropy models. Special cases include logistic regression and Boltzmann machines. ...
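As a sketch of the core mechanic, a log-linear model scores each label with a weighted feature sum and normalizes with a softmax. The toy feature function, label set, and weights below are hypothetical, purely to make the excerpt's $p(y \mid x)$ concrete.

```python
import numpy as np

LABELS = ["positive", "negative"]

def features(x, y):
    """Toy feature function f(x, y): word-presence features per label."""
    good, bad = float("good" in x), float("bad" in x)
    if y == "positive":
        return np.array([good, bad, 0.0, 0.0])
    return np.array([0.0, 0.0, good, bad])

def log_linear_prob(x, w):
    """p(y | x) proportional to exp(w . f(x, y)), normalized over labels."""
    scores = np.array([w @ features(x, y) for y in LABELS])
    scores -= scores.max()          # numerical stability
    p = np.exp(scores)
    return dict(zip(LABELS, p / p.sum()))

w = np.array([2.0, -2.0, -2.0, 2.0])  # hypothetical learned weights
print(log_linear_prob("a good movie", w))  # strongly favors "positive"
```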

5 min · Xuanqiang 'Angelo' Huang

Markov Processes

Let us talk about Markov processes. We should keep the content of Markov Chains well in mind before approaching this chapter. Markov property A state can be said to satisfy the Markov property if, intuitively speaking, it already contains all the information needed to predict the next state; that is, given a sequence of states $(S_n)_{n \in \mathbb{N}}$, we have $P(S_k \mid S_{k-1}) = P(S_k \mid S_0 S_1 \dots S_{k-1})$, i.e. the current state $S_{k}$ depends only on the previous state. ...
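A minimal simulation sketch of this property: sampling the next state looks only at the current state, never at the earlier history. The two-state transition matrix is a hypothetical example, not taken from the note.

```python
import numpy as np

# Hypothetical two-state Markov chain: row i is P(next state | current = i)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

rng = np.random.default_rng(0)

def simulate(n_steps, s0=0):
    """Sample a trajectory; each step uses only the current state."""
    states = [s0]
    for _ in range(n_steps):
        states.append(rng.choice(2, p=P[states[-1]]))
    return states

print(simulate(10))
```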

12 min · Xuanqiang 'Angelo' Huang

Markup

Introduction to the functions of markup 🟩 The semantics of a word is characterized by my choice (a design decision about meaning). That alone does not say much, so let us try to tell a bit more. We define markup as any means of making a particular interpretation of a text explicit. In particular, it is a way of making some meaning explicit (a bit like punctuation, which adds information beyond the individual words and makes the use of the text clearer). ...

8 min · Xuanqiang 'Angelo' Huang

On The Double Descent Phenomenon

Double descent is a striking phenomenon in modern machine learning that challenges the traditional bias–variance tradeoff. In classical learning theory, increasing model complexity beyond a certain point is expected to increase test error because the model starts to overfit the training data. However, in many contemporary models, from simple linear predictors to deep neural networks, a second descent in test error emerges as the model becomes even more overparameterized.

At its core, the double descent curve can be understood in three stages. In the first stage, as the model's capacity increases, the error decreases because the model is better able to capture the underlying signal in the data. As the model approaches the interpolation threshold (where the number of parameters is roughly equal to the number of data points), the model fits the training data exactly. This exact fitting, however, makes the model extremely sensitive to noise, leading to a spike in test error. Surprisingly, when the model complexity is increased further into the highly overparameterized regime, the training algorithm (often stochastic gradient descent) tends to select, from the many possible interpolating solutions, one that exhibits desirable properties such as lower norm or smoothness. This implicit bias toward simpler, more generalizable solutions causes the test error to decrease again, producing the second descent. ...

3 min · Xuanqiang 'Angelo' Huang