The Twitter Memes Dataset was analyzed with the PageRank algorithm in a Big Data context. The dataset contains over 96 million documents and over 418 million links between them.

To handle dead ends (nodes with no outgoing links) in the graph, two methods were used: recursively removing dead ends, or redistributing the PageRank mass that leaks through dead ends across all other nodes. The first method produced a cleaner graph of only 1,113,524 nodes; the second kept the graph strongly connected but introduced additional noise.

PageRank was then run on two collections, NO_DEAD_ENDS and ORIGINAL, which produced significantly different results because of the dead ends and the many links to disclaimer pages in the original graph. The results showed that the pages with the highest PageRank are often disclaimers and information pages, not necessarily content relevant to users' interests.

Bulk insertion speeds were also analyzed: insertion time dropped significantly as the batch size increased, but the gains tapered off beyond 10,000 documents per batch because of the maximum message size limit.

The study highlights the importance of weighing relevance alongside raw PageRank scores and the need for efficient bulk-indexing methods.
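The two dead-end strategies described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: `remove_dead_ends` prunes recursively (removing a dead end can expose new ones), and `pagerank` follows the redistribution method by spreading the leaked mass uniformly over all nodes. The toy graphs and parameter values are assumptions for illustration.

```python
def remove_dead_ends(graph):
    """Recursively prune dead ends (nodes with no outgoing links).

    Assumes every node appears as a key in `graph` (node -> set of
    successors). Removing a dead end can create new dead ends, so we
    iterate until none remain. Returns the pruned graph and the
    removal order.
    """
    graph = {u: set(vs) for u, vs in graph.items()}  # defensive copy
    removed = []
    while True:
        dead = [u for u, vs in graph.items() if not vs]
        if not dead:
            break
        for u in dead:
            removed.append(u)
            del graph[u]
        for u in graph:  # drop edges that now point nowhere
            graph[u] -= set(dead)
    return graph, removed


def pagerank(graph, beta=0.85, iters=50):
    """PageRank that redistributes the rank leaked through dead ends
    (and the teleport mass) uniformly over all nodes."""
    nodes = set(graph)
    for vs in graph.values():
        nodes |= set(vs)
    g = {u: set(graph.get(u, ())) for u in nodes}
    n = len(nodes)
    r = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: 0.0 for u in nodes}
        for u, vs in g.items():
            if vs:  # dead ends contribute nothing here...
                share = beta * r[u] / len(vs)
                for v in vs:
                    nxt[v] += share
        leaked = 1.0 - sum(nxt.values())  # ...their mass is leaked,
        for u in nodes:                   # then spread uniformly
            nxt[u] += leaked / n
        r = nxt
    return r
```

On a graph with a dead end such as `{'a': {'b'}, 'b': {'a', 'c'}, 'c': set()}`, the redistribution variant still yields ranks summing to 1, whereas recursive pruning of `{'a': {'b'}, 'b': set()}` removes both nodes, showing why pruning can shrink the graph so drastically.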
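The bulk-insertion finding can be illustrated with a simple batching helper. This is a hedged sketch, not the study's code: the chunking logic is generic, and the commented driver call (`collection.insert_many`) is a hypothetical example of how such batches would be sent to a document store whose protocol imposes a maximum message size.

```python
def chunked(docs, batch_size):
    """Split a list of documents into fixed-size batches for bulk insertion.

    Larger batches amortize per-request overhead, but beyond some point
    (around 10,000 documents in the study) throughput stops improving
    because each batch must still fit under the server's maximum
    message size.
    """
    for i in range(0, len(docs), batch_size):
        yield docs[i:i + batch_size]


# Hypothetical usage with a driver's bulk API:
# for batch in chunked(documents, 10_000):
#     collection.insert_many(batch)
```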