
mfbm_classification_tensorflow

Communication Protocol Summary

The task is about building a communication protocol, given the following user scenario as an example:

  • the sender generates a number sequence representing each of 64 characters (e.g. "a" => "0.001, 0.0023...", "b" => "0.003, 0.4...") and sends the sequences to the receiver
  • the receiver decodes the sequences back to the 64 characters

Brownian motion can be used to generate correlated random numbers, and the generated sequence differs at each realization. The correlation is derived from the covariance matrix, and it is what makes classification possible; hence Brownian motion is used on the sender side to generate the sequences. What is more interesting is that we usually apply machine learning techniques to gathered data, whereas in this task the data is manufactured. On the receiver side, we use a CNN + Tensorflow to build the sequence classifier. There are also additional functional requirements, e.g. the difference between Hurst values should not exceed 0.05, so that the generated sequences cannot be distinguished easily.

Two solution proposals are discussed in the second section; "correlated random number generation" and "machine learning training" are discussed in sections three and four. Here are two short videos showing the encode, classify and decode process: the first uses fbm and the second uses mfbm.

fbm

mfbm

Solution proposals

Proposal1

Each alphabet character is mapped to three digits, and each digit is further mapped to one Hurst value ("0" corresponds to H0.7, "1" corresponds to H0.75 and "2" corresponds to H0.8). For example, "a" => "000" => "H0.7/H0.7/H0.7", "f" => "012" => "H0.7/H0.75/H0.8". When the character "a" is sent, three Brownian motion random walk realizations with Hurst value 0.7 are generated, and 256 * 3 digits in total are sent. The classifier classifies each block of 256 digits and gets "000", hence "a" is obtained. In this solution, the Hurst value within each realization stays constant. With 256 as sequence length, we achieve 95% accuracy on this solution with 3 labels.
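As a sketch, the digit-to-Hurst mapping from Proposal 1 could look like the following (the alphabet ordering and the base-3 digit derivation are illustrative assumptions, not the repo's actual implementation):

```python
# Illustrative sketch of Proposal 1's encoding; the alphabet order and
# base-3 encoding are assumptions, not the repo's actual implementation.
DIGIT_TO_HURST = {"0": 0.7, "1": 0.75, "2": 0.8}

def char_to_hursts(ch, alphabet="abcdefghijklmnopqrstuvwxyz"):
    idx = alphabet.index(ch)
    digits = ""
    for _ in range(3):              # three base-3 digits per character
        digits = str(idx % 3) + digits
        idx //= 3
    return [DIGIT_TO_HURST[d] for d in digits]

print(char_to_hursts("a"))  # "a" -> "000" -> [0.7, 0.7, 0.7]
print(char_to_hursts("f"))  # "f" -> "012" -> [0.7, 0.75, 0.8]
```

The sender would then run one fbm realization of length 256 per Hurst value in the returned list.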

Hurst 0.8: Figure - 0.8

Hurst 0.4: Figure - 0.4

Proposal2

Here we utilize the characteristics of multifractional Brownian motion, where the Hurst value is dynamic and each unique Hurst function corresponds to one label.

mfbm realization for an increasing ramp Hurst function:

def h1(t):
    # ramp Hurst function: constant 0.2, then a linear ramp up to 0.4
    if t <= 0.5:
        return 0.2
    if 0.5 < t <= 0.7:
        return 0.2 + (t - 0.5)
    return 0.4

Figure - 0.9

mfbm realization for a decreasing ramp Hurst function:

def h2(t):
    # ramp Hurst function: constant 0.4, then a linear ramp down to 0.2
    if t <= 0.5:
        return 0.4
    if 0.5 < t <= 0.7:
        return 0.4 - (t - 0.5)
    return 0.2  # 0.4 - (0.7 - 0.5); keeps the function continuous at t = 0.7

Figure - 1.9

Generate correlated random digit sequence

This wiki page and youtube video provide a great overview of the whole process. This github project contains implementations of three different fbm methods ("daviesharte", "cholesky" and "hosking") plus one mfbm method, "riemannliouville". (ref1)

  • Step 1: generate random numbers from a Gaussian distribution. (ref1, ref2)

  • Step 2: create a symmetric, positive-definite covariance matrix using a Hurst value other than 0.5. (ref1, ref2, ref3, ref4)

  • Step 3: compute the square root (standard deviation) matrix of the covariance matrix from step 2 using Cholesky decomposition and keep the lower-triangular factor. (ref1, ref2, ref3, ref4 - geometry visualization)

  • Step 4: multiply the standard deviation matrix (from step 3) with the random numbers (from step 1) arranged as a single column vector.
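The four steps above can be sketched in a few lines of NumPy, assuming the standard fbm covariance E[B_H(s) B_H(t)] = (s^(2H) + t^(2H) - |s - t|^(2H)) / 2; the repo's "cholesky" method may differ in details:

```python
import numpy as np

def fbm_cholesky(n, hurst, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(1, n + 1) / n                 # time grid on (0, 1]
    s, u = np.meshgrid(t, t)
    # step 2: fbm covariance matrix (symmetric, positive-definite)
    cov = 0.5 * (s ** (2 * hurst) + u ** (2 * hurst)
                 - np.abs(s - u) ** (2 * hurst))
    # step 3: lower-triangular square root via Cholesky decomposition
    L = np.linalg.cholesky(cov)
    # step 1: iid standard Gaussian column vector
    z = rng.standard_normal(n)
    # step 4: correlated sequence = L @ z
    return L @ z

sequence = fbm_cholesky(256, 0.7)               # one realization for label H0.7
```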

Obviously, there is a mathematical formula for all of this, but if you don't understand the part with the covariance matrix, you probably won't figure out why classification is possible in this task, which is the tricky part. Everything else can be found inside this wonderful project.

Training and Classification

Hurst value estimators (paper1, paper2), ARIMA models or the implementation in this project would be a good fit for manual feature extraction, but I decided to go for unsupervised feature extraction plus classification using a CNN and Tensorflow. These resources (ref1, ref2 and ref3) analyze time series data to solve classification problems or predict future values.

Solution 1 - fbm: there are only three labels; realizations are made using Hurst 0.7, Hurst 0.75 and Hurst 0.8. For each Hurst value, 10000 samples with sequence length 256 are generated as training and validation data; the accuracy on the validation dataset is 93%-98%.

Figure - 11

Solution 2 - mfbm: the sequence length is 600 and the deep learning architecture remains the same. When the Hurst value difference is below 0.05, accuracy only reaches about 70%; once we let it go up to 0.07, accuracy lands on 99% with 3 labels.

Figure - 12

Source Code

There are two packages, "fbmsolution" and "mfbmsolution", and each package contains two entry files, "solution.cs" and "training.cs". They can be run separately to debug through the code base. Note: a Tensorflow installation is required.
