Let’s create a function to capture the sentiment of the last 200 tweets of Donald Trump and display it on a plot:

```python
def anl_tweets(lst, title='Tweets Sentiment', engl=True):
    sents = []
    for tw in lst:
        try:
            st = sentiment_analyzer_scores(tw, engl)
            sents.append(st)
        except Exception:
            # If a tweet cannot be scored, treat it as neutral
            sents.append(0)
    ax = sns.distplot(sents, kde=False, bins=3)
    ax.set(xlabel='Negative    Neutral    Positive',
           ylabel='#Tweets',
           title="Tweets of @" + title)
    return sents
```

The return of this function is a list with the sentiment score result (-1, 0 or 1) of each individual tweet used as an input parameter.

Analyzing tweets with Word Cloud

Another interesting quick analysis is to take a peek at the “word cloud” generated from a list of tweets. For that, we will use word_cloud, a little word cloud generator in Python. Read more about it on the blog post or the website.

First, install word_cloud:

```
pip install wordcloud
```

Now, let’s create a general function for generating a word cloud from a tweet list:

```python
def word_cloud(wd_list):
    stopwords = set(STOPWORDS)
    all_words = ' '.join([text for text in wd_list])
    wordcloud = WordCloud(background_color='white',
                          stopwords=stopwords,
                          width=1600,
                          height=800,
                          random_state=21,
                          colormap='jet',
                          max_words=50,
                          max_font_size=200).generate(all_words)
    plt.figure(figsize=(12, 10))
    plt.axis('off')
    plt.imshow(wordcloud, interpolation="bilinear")
```

Now that we have all functions defined, we can replicate this analysis for any group of tweets generated by any tweeter. Let’s try the same for the last 200 tweets of Obama:

Streaming tweets for a specific filter

The Twitter streaming API is used to download Twitter messages in real time. It is useful for obtaining a high volume of tweets, or for creating a live feed using a site stream or user stream. Tweepy makes it easier to use the Twitter streaming API by handling authentication, connection, creating and destroying the session, reading incoming messages, and partially routing messages.

The most important parameters when creating a real-time tweet
listener are the following:

track
A comma-separated list of phrases which will be used to determine what Tweets will be delivered on the stream. A phrase may be one or more terms separated by spaces, and a phrase will match if all of the terms in the phrase are present in the Tweet, regardless of order and ignoring case. In this model, you can think of commas as logical ORs, while spaces are equivalent to logical ANDs (e.g. ‘the twitter’ is "the AND twitter", while ‘the,twitter’ is "the OR twitter").

language
This parameter may be used on all streaming endpoints, unless explicitly noted. Setting this parameter to a comma-separated list of BCP 47 language identifiers corresponding to any of the languages listed on Twitter’s advanced search page will return only Tweets that have been detected as being written in the specified languages. For example, connecting with language=en will stream only Tweets detected to be in English. Other language codes:
– en: English
– es: Spanish
– pt: Portuguese

follow
A comma-separated list of user IDs, indicating the users whose Tweets should be delivered on the stream. Following protected users is not supported. For each user specified, the stream will contain:
– Tweets created by the user.
– Tweets which are retweeted by the user.
– Replies to any Tweet created by the user.
– Retweets of any Tweet created by the user.
– Manual replies, created without pressing a reply button (e.g. “@twitterapi I agree”).

locations
A comma-separated list of longitude,latitude pairs specifying a set of bounding boxes to filter Tweets by. More details
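The comma/space semantics of the track parameter can be illustrated with a small, self-contained sketch. The matches_track helper below is hypothetical (it is not part of Tweepy or the Twitter API, and real Twitter matching also considers hashtags, mentions and URLs); it only demonstrates the rule that commas act as ORs between phrases and spaces act as ANDs between terms:

```python
def matches_track(track, tweet):
    """Return True if the tweet matches the track filter.

    Commas separate phrases (logical OR); spaces separate terms
    within a phrase (logical AND). Matching ignores case and order.
    """
    words = tweet.lower().split()
    for phrase in track.lower().split(','):
        terms = phrase.split()
        # The phrase matches if every one of its terms appears in the tweet.
        if terms and all(t in words for t in terms):
            return True
    return False

# 'the twitter' means "the AND twitter"; 'the,twitter' means "the OR twitter"
print(matches_track('the twitter', 'Twitter is the place'))  # True
print(matches_track('the twitter', 'I love twitter'))        # False
print(matches_track('the,twitter', 'I love twitter'))        # True
```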