Friendly hello - pingback #14

Closed
vackosar opened this issue Jul 12, 2017 · 12 comments

@vackosar

Hello!

I was inspired by your project and created a simplified alternative. I cited you in the readme: https://github.com/vackosar/keras-punctuator

Let me know if you find this interesting.

Vaclav

@colinskow

WOW! I never would have thought to try a Conv1D model. 92% precision is impressive, although you are using binary punctuation (yes or no) versus 8 categories. I'll try this model out in addition to my other experiments (such as using POS tags).

How many epochs did it take to get those results?

@colinskow

This is very promising. I plugged this rough architecture into EuroParl with 8 punctuation categories and am getting 83% precision, 48% recall after just 5 epochs. It is training 50x faster than @ottokart's model -- which gives me freedom to experiment and get results much faster. I'll post the results after some experimenting.
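
For readers following along, here is a minimal sketch of what such a windowed Conv1D tagger could look like in Keras. The hyper-parameters and layer choices below are illustrative guesses, not the exact keras-punctuator configuration:

```python
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dropout, Dense

VOCAB_SIZE = 20000   # assumed vocabulary size
WINDOW = 30          # assumed context window, in words
NUM_CLASSES = 8      # no-punct plus 7 punctuation marks

# Each training example is a window of word ids; the label is the
# punctuation class predicted for a slot inside that window.
model = Sequential([
    Embedding(VOCAB_SIZE, 128, input_length=WINDOW),
    Conv1D(256, kernel_size=5, activation="relu"),
    GlobalMaxPooling1D(),
    Dropout(0.5),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```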

@ottokart
Owner

Hi!

That's definitely very interesting, thanks for sharing! The 50x speedup is really impressive.
Do you include the no-punct category in your precision/recall/f-score computations?
If so, then what are the scores without no-punct?

Best,
Ottokar
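
To make the question concrete, here is a quick scoring sketch with scikit-learn (class ids and arrays are toy placeholders): micro-averaging over all classes including no-punct reduces to plain accuracy and inflates the numbers, so the punctuation-only scores are computed by restricting `labels`.

```python
from sklearn.metrics import precision_recall_fscore_support

NO_PUNCT = 0                          # assumed id of the no-punct class
PUNCT_LABELS = [1, 2, 3, 4, 5, 6, 7]  # comma, period, question mark, ...

# Toy per-slot class ids standing in for real test-set outputs.
y_true = [0, 1, 0, 2, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 0, 2, 1, 0]

# Scores with no-punct included (equivalent to accuracy under micro-averaging).
p_all, r_all, f_all, _ = precision_recall_fscore_support(
    y_true, y_pred, average="micro")

# Scores over the punctuation classes only.
p, r, f, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=PUNCT_LABELS, average="micro")
```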

@colinskow

Hi Ottokar,

I am using the TensorFlow Estimator API since it is very efficient and designed to take models into large-scale distributed production.

It works well with large volumes of static data, and training went perfectly. But I am running into an issue: punctuation has to be predicted one line at a time, because we use the EOS tokens from the previous results to partition the input.

I need to figure out how to create an input_fn for estimator.predict which will accept one line at a time asynchronously.
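
One possible workaround, sketched here under assumptions (the queue, names, and shapes are hypothetical, not project code): back the `input_fn` with a long-lived generator via `tf.data.Dataset.from_generator`, so a single `estimator.predict` call keeps pulling lines as they are pushed in.

```python
import queue

import numpy as np
import tensorflow as tf

def make_streaming_input_fn(line_queue):
    """Builds an input_fn whose dataset blocks on a queue of token-id arrays."""
    def line_generator():
        while True:
            yield line_queue.get()  # blocks until the next line is pushed

    def input_fn():
        ds = tf.data.Dataset.from_generator(
            line_generator,
            output_types=tf.int64,
            output_shapes=tf.TensorShape([None]))  # variable-length lines
        return ds.batch(1)  # one line per prediction step
    return input_fn

# Usage sketch:
# lines = queue.Queue()
# predictions = estimator.predict(input_fn=make_streaming_input_fn(lines))
# lines.put(np.array(token_ids, dtype=np.int64))
# result = next(predictions)  # prediction for the line just pushed
```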

I've done some manual testing and results are very promising...

i hope the commission and the council will come forward with proposals on this matter .PERIOD mr president ,COMMA like so many others ,COMMA i want to congratulate the irish presidency on the success of its term of office and to say that ,COMMA because smaller countries have fewer resources ,COMMA the success which they achieve ,COMMA therefore deserves greater commendation .PERIOD i want to compliment mr bruton ,COMMA the taoiseach ,COMMA mr spring ,COMMA the tánaiste and mr mitchell ,COMMA all of whom worked extremely hard and contributed immensely to that success .PERIOD i want also to acknowledge today that it was not just while mr bruton was president-in-office ,COMMA that he was dedicated to the european ideal .PERIOD
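
For clarity, here is a hypothetical post-processing helper (the tag-to-character mapping follows the conventions visible above) that folds the predicted tags back into readable text:

```python
# Maps the predicted tag tokens shown above to punctuation characters.
TAG2CHAR = {",COMMA": ",", ".PERIOD": ".", "?QUESTIONMARK": "?",
            "!EXCLAMATIONMARK": "!", ":COLON": ":", ";SEMICOLON": ";",
            "-DASH": "-"}

def detag(text):
    words = []
    for token in text.split():
        if token in TAG2CHAR and words:
            words[-1] += TAG2CHAR[token]  # attach punctuation to the previous word
        else:
            words.append(token)
    return " ".join(words)

# detag("on this matter .PERIOD mr president ,COMMA like so many others ,COMMA")
# -> "on this matter. mr president, like so many others,"
```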

@ottokart
Owner

This looks very nice indeed.

@vackosar
Author

I didn't measure the f-scores without no-punct. I am very sceptical about the precision reported by the app.

BTW, if you like the project you can link back to it in your readme.

@colinskow

I got it working (after hacking the TensorFlow prediction function)!!!

| PUNCTUATION | PRECISION | RECALL | F-SCORE |
| --- | --- | --- | --- |
| ,COMMA | 58.2 | 62.1 | 60.1 |
| .PERIOD | 73.0 | 59.4 | 65.5 |
| ?QUESTIONMARK | 58.9 | 11.2 | 18.9 |
| !EXCLAMATIONMARK | 75.0 | 8.8 | 15.7 |
| :COLON | 53.4 | 25.0 | 34.1 |
| ;SEMICOLON | 46.1 | 10.7 | 17.4 |
| -DASH | 56.9 | 9.7 | 16.6 |
| Overall | 63.7 | 57.6 | 60.5 |

It doesn't reach the full accuracy of your model, but it is still quite impressive for training 40-50x faster. It peaked at about 17 epochs (a few hours on my CPU), but I didn't have early stopping enabled and lost the checkpoint (peak precision was a few percent higher). There's also definite room for improvement by tuning hyper-parameters.
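
On the lost checkpoint: a small sketch of the standard Keras callbacks that would preserve the best weights and stop training once validation loss stalls (the file name and patience value are placeholders):

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint

callbacks = [
    # Stop after 3 epochs without improvement on validation loss.
    EarlyStopping(monitor="val_loss", patience=3, verbose=1),
    # Keep only the best-performing weights seen so far.
    ModelCheckpoint("best_model.h5", monitor="val_loss",
                    save_best_only=True, verbose=1),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=30, callbacks=callbacks)
```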

@ottokart
Owner

@vackosar There's an alternatives section in the readme now that refers to your work as well.

@nshmyrev

nshmyrev commented Oct 28, 2017

The problem is that the precision/recall calculation in keras-punctuator is simply wrong. The real numbers are much lower, and that's why you get the impression it trains faster. The overall approach with a CNN is about the same as CNN-2A from the paper by X. Che, C. Wang, H. Yang, and C. Meinel, "Punctuation prediction for unsegmented transcript based on word vector", which is also referenced in the punctuator2 paper, and the numbers match the results in table 2 of the punctuator2 paper (about 54% overall f-score instead of 64% for punctuator2).

@vackosar
Author

I would definitely like to fix the precision/recall calculation. I agree that there is obviously something wrong with it.
On the other hand, the model does seem to converge faster to its best possible result, which is worse than with the other network.

@vackosar
Author

vackosar commented Aug 5, 2018

@nshmyrev Interesting article about RNNs vs. feed-forward networks: http://www.offconvex.org/2018/07/27/approximating-recurrent/

@vackosar closed this as completed Aug 5, 2018
@nshmyrev

@vackosar this article has multiple factual mistakes and misinterpretations. For example, take:

> Upon publication, the feed-forward, autoregressive WaveNet was a substantial improvement over LSTM-RNN parametric models.

WaveNet is a different kind of codec that operates on a sample-by-sample basis; that's why it achieved higher quality than previous vocoder-based architectures. They are not comparing apples to apples here.

> On the Billion Word Benchmark, an intriguing Google Technical Report suggests an LSTM n-gram model with n=13 words of memory is as good as an LSTM with arbitrary context.

Actually, the perplexity difference is significant: 46 vs. 43 is a meaningful difference in many applications.

And so forth.
