Verify the convolution theorem using Matlab
When using Matlab to verify “convolution in the time domain is multiplication in the frequency domain”, simply applying fft2 is wrong.
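A minimal sketch of the point (my own toy example, not the post's code): multiplying unpadded FFTs gives circular convolution, and the product only matches conv2 (linear convolution) after both arrays are zero-padded to size(A)+size(B)-1.

```matlab
% Sketch: FFT-based convolution matches conv2 only after zero-padding;
% without padding, the product of FFTs is a *circular* convolution.
A = rand(8, 8);            % example image block
B = rand(3, 3);            % example kernel

C_direct = conv2(A, B);    % linear convolution, size 10x10

sz = size(A) + size(B) - 1;                              % padded size
C_fft = ifft2(fft2(A, sz(1), sz(2)) .* fft2(B, sz(1), sz(2)));

max(abs(C_direct(:) - C_fft(:)))   % ~1e-15, i.e. the two agree
```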
We know that the wavelet transform comes from the STFT by changing a single windowing function to a set of windowing functions. \(X(t,\omega) = \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-j\omega \tau}\, d\tau\) ...
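For reference, the standard continuous wavelet transform (standard notation, not necessarily the post's) replaces the fixed window \(w\) above with scaled and shifted copies of a mother wavelet \(\psi\):

\[ W(a, b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - b}{a}\right) dt \]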
Why use the STFT? We want to know the frequency components at each time point. Solution: we perform the FFT on small blocks. \(X(t,\omega) = \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-j\omega \tau}\, d\tau\) ...
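As a hedged illustration of “FFT on small blocks” (assuming the Signal Processing Toolbox; the post may well compute the blocks by hand), MATLAB's spectrogram function does exactly this windowed FFT:

```matlab
% Sketch: STFT of a chirp computed as windowed FFTs (Signal Processing Toolbox).
fs = 1e3;                      % sampling rate, Hz
t  = 0:1/fs:2;                 % 2 s of samples
x  = chirp(t, 0, 2, 250);      % frequency sweeps from 0 to 250 Hz

% 128-sample Hamming window, 120-sample overlap, 256-point FFT per block
spectrogram(x, hamming(128), 120, 256, fs, 'yaxis');
```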
Why use the DCT? The DTFS/FFT (we will not go into the difference between these two at the moment) uses $e^{j\omega t}$ as the basis. And if $x(t)$ is real and even, ...
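A quick check of the “real and even” claim (my own toy sequence, not from the post): a real sequence with even symmetry x[n] = x[mod(N-n, N)] has a purely real DFT, i.e. only cosine components survive, which is what motivates the DCT's cosine basis.

```matlab
% Sketch: the DFT of a real, even-symmetric sequence is (numerically) real.
x = [4 3 2 1 2 3];      % even: x(2)==x(6), x(3)==x(5) (MATLAB 1-based indexing)
X = fft(x);
max(abs(imag(X)))       % ~1e-15: only cosine (real) terms remain
```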
1. Physiological Source of EEG. The first few pages of “François Grimbert, Olivier Faugeras. Analysis of Jansen’s model of a single cortical column. Researc...
This is a traditional method for non-linear optimization problems. A simplex is a structure in n-dimensional space formed by n+1 points that are n...
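In MATLAB this is what fminsearch implements (the Nelder-Mead simplex search); a small sketch on the Rosenbrock function, which is my example rather than the post's:

```matlab
% Sketch: Nelder-Mead simplex search via fminsearch on the Rosenbrock function.
rosen = @(x) (1 - x(1))^2 + 100*(x(2) - x(1)^2)^2;   % minimum at (1, 1)
x0 = [-1.2, 1];                                      % classic starting point
[xmin, fval] = fminsearch(rosen, x0);
% xmin ends up close to [1 1]; only function values are used, no gradients.
```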
I wrote some notes on how to customize your website. Change the sidebar width. If you want to change the image size: go to _sass and change i...
Beyond the notes on changing _config.yml and the _pages folder that are already on the academicpages website, here I write down some notes to further custom...
Comparison of GPU performance online, from Tim Dettmers
Original post: http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/
We have the observed variable \(x\), the latent variable \(z\), and parameters \(\theta\). The complete log likelihood should be \(\ell(\theta; x, z) = \log p(x, z \mid \theta)\) ...
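For context, the usual next step (standard EM, not necessarily the post's exact wording): since \(z\) is not observed, one maximizes the expected complete log likelihood under the current posterior instead,

\[ Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \sim p(z \mid x, \theta^{(t)})}\big[ \log p(x, z \mid \theta) \big]. \]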
Clustering aims at grouping sets of objects that have similar “properties”. There are many different ways to define the distance for clustering. The famo...
Theory: Bounding the generalization error. Introduction: Probably approximately correct (PAC) learning (Valiant, 1984) offers a theoretical framework to provide...
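As a reminder of the flavor of guarantee PAC learning gives (the standard finite, realizable hypothesis-class case; the post may state a different form): with probability at least \(1-\delta\), any hypothesis \(h \in \mathcal{H}\) consistent with \(m\) i.i.d. training examples satisfies

\[ \operatorname{err}(h) \le \frac{1}{m}\left( \ln |\mathcal{H}| + \ln \frac{1}{\delta} \right). \]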
(Much of this is copied from Sun's 2019 optimization paper.)
Using Matlab in Jupyter Notebook. A Jupyter notebook can combine text and code, which can be really helpful when you are studying new things and taking notes. The st...
Taken from https://gist.github.com/StefanoMagrassi/6c78b6f48599377a5c805fd58a5cd93d
The content below is a translation and summary of the Wikipedia page on Multitaper.
Semi-supervised learning: mixing labeled and unlabeled data as training data (no querying for labels during training); based on the belief that the data has the ...
Naive Bayes. A linear classifier. Generative model: models \(p(X,y)\).
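Concretely (the standard factorization, filled in by me rather than quoted from the post), modeling \(p(X, y)\) with the naive conditional-independence assumption means

\[ p(X, y) = p(y) \prod_{j=1}^{d} p(x_j \mid y), \qquad \hat{y} = \arg\max_{y}\; p(y) \prod_{j=1}^{d} p(x_j \mid y). \]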
The notes from CMU 17spring-18898:
Ringing artifacts
(Summarized from the lecture notes of CMU spring17-10708, Eric Xing.)