Enhancements on the SentimentAnalyzer #1
Comments
@kamranayub - your thoughts are most welcome here 😊
I like that! Some kind of pseudo code here:
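Something along these lines, perhaps; a minimal sketch assuming a hypothetical `SentimentAnalyzer` class, `Analyze` method and `SentimentModel` enum (none of these names are taken from the existing library):

```csharp
using System;

// Hypothetical flag for picking the domain-specific model.
public enum SentimentModel
{
    Movie,
    CustomerFeedback
}

public class SentimentAnalyzer
{
    // Stub only: a real implementation would route to the model trained
    // on the dataset matching the requested flag.
    public string Analyze(string text, SentimentModel model) => "Positive";
}

public static class Demo
{
    public static void Main()
    {
        var analyzer = new SentimentAnalyzer();

        // A movie review goes to the movie-trained model...
        Console.WriteLine(analyzer.Analyze("A breathtaking film.", SentimentModel.Movie));

        // ...while customer feedback goes to the customer-feedback-trained model.
        Console.WriteLine(analyzer.Analyze("The parcel arrived damaged.", SentimentModel.CustomerFeedback));
    }
}
```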
Just for some more data points, I had some review text I was running through the analyzer:
And yet it determined the prediction was… One neat thing could be to use the IMDB dataset for tests, since it includes the sentiment score (-1 = negative, 0 = neutral, 1 = positive), and test models against it to see how well they match up.
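A rough sketch of how that comparison could work, assuming the IMDB rows can be loaded as (text, score) pairs with -1/0/1 labels; the sample rows and the `Evaluate` helper below are illustrative, not part of the existing code:

```csharp
using System;
using System.Collections.Generic;

public static class ImdbEvaluation
{
    // Illustrative placeholder rows; in practice these would be read from the IMDB dataset,
    // where -1 = negative, 0 = neutral and 1 = positive.
    private static readonly List<(string Text, int Score)> Samples = new()
    {
        ("An unforgettable performance.", 1),
        ("It was fine, nothing special.", 0),
        ("A complete waste of two hours.", -1)
    };

    // Compare a model's predictions against the dataset labels and report simple agreement.
    public static double Evaluate(Func<string, int> predict)
    {
        int correct = 0;
        foreach (var (text, score) in Samples)
        {
            if (predict(text) == score)
            {
                correct++;
            }
        }
        return (double)correct / Samples.Count;
    }

    public static void Main()
    {
        // Placeholder predictor that always answers "positive"; swap in the real model call.
        double agreement = Evaluate(_ => 1);
        Console.WriteLine($"Agreement with IMDB labels: {agreement:P0}");
    }
}
```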
Yeah, I noticed that as well. You are right, there's room for further model training; I will schedule it soon. Meanwhile, what we can do is collect the datasets and finalize the categories:
Movies - IMDB dataset
The above categories are not restricted (they can also be expanded if we want), and each can be trained with more than one dataset. Little to no feature engineering may be required, which I can handle.
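As a small sketch of that plan, the category-to-dataset mapping could be kept as plain configuration data; only the Movies/IMDB pairing is named in this thread, the rest is left open:

```csharp
using System.Collections.Generic;

public static class TrainingPlan
{
    // Each category may be backed by one or more datasets; more categories can be added later.
    public static readonly Dictionary<string, List<string>> CategoryDatasets = new()
    {
        ["Movies"] = new List<string> { "IMDB" },
        ["CustomerFeedback"] = new List<string>() // dataset(s) still to be decided
    };
}
```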
Hey, I would also like to contribute.
@Wodlfvllf Please go ahead!
As a part of arafattehsin/CognitiveRocket#3, an idea has been floated around keeping a single library with multiple models under the hood. Each model will have a specific dataset (or datasets) on which it is going to be trained for fine-grained sentiment analysis.
The user / developer should be able to call the function by passing a flag (probably an Enum?); a rough sketch follows the examples below.
For example:
A user analyzing sentiment for movie reviews will call the Movie Sentiment Analysis Model.
A user looking for customer review sentiment analysis will call the Customers Feedback Analysis Model.
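Under those assumptions, the library-side shape might look roughly like this: one public entry point that resolves the domain-specific model from the flag. The interface and class names here are hypothetical, not the repository's actual types:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical flag exposed to the caller.
public enum SentimentDomain
{
    Movie,
    CustomerFeedback
}

// Each domain model is trained on its own dataset(s) but sits behind a common interface.
public interface ISentimentModel
{
    string Predict(string text);
}

public class MultiModelSentimentAnalyzer
{
    private readonly Dictionary<SentimentDomain, ISentimentModel> _models;

    public MultiModelSentimentAnalyzer(Dictionary<SentimentDomain, ISentimentModel> models)
        => _models = models;

    // The caller only passes the flag; the matching model is resolved under the hood.
    public string Analyze(string text, SentimentDomain domain)
    {
        if (!_models.TryGetValue(domain, out var model))
        {
            throw new ArgumentOutOfRangeException(nameof(domain), domain, "No model registered for this domain.");
        }
        return model.Predict(text);
    }
}
```

Keeping every model behind one interface is what would let a single library grow new categories without changing the public call.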