The Internet is becoming increasingly isolated and fragmented, yet this has never been caused by technical limitations.
Ironically, this phenomenon is particularly prominent in scientific journals, which runs directly counter to the openness and cooperation that science claims to embody. A scientist usually spends a great deal of time visiting the websites of multiple scientific journals to follow the latest articles in a specific field. An experienced scientist is often familiar with the layout of journal websites and keeps the content style of different article types in mind. But none of this has anything to do with science.
RSS is an old technology that lets users subscribe to a website's updates in a standardized format. In contrast to the various AI-based recommendation algorithms, it is a way for users to obtain information actively. But for revenue reasons, the RSS feeds of major websites are often of little use, or do not exist at all.
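To make the "standardized format" concrete, here is a minimal sketch of reading an RSS 2.0 feed with only the Python standard library. The journal name, URLs, and article titles are made up for illustration; a real feed would be fetched over HTTP first.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 document, roughly what a journal site might serve.
# (Hypothetical journal name and URLs, for illustration only.)
RSS_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Journal of Examples</title>
    <link>https://journal.example.org</link>
    <item>
      <title>A new result</title>
      <link>https://journal.example.org/articles/1</link>
      <pubDate>Thu, 12 Mar 2020 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_feed(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(parse_feed(RSS_SAMPLE))
# → [('A new result', 'https://journal.example.org/articles/1')]
```

Because the format is standardized, one small parser like this works for every site that publishes a feed — which is exactly why it matters when major sites stop publishing one.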
From my freshman year of college, I was a heavy user of Google Reader. Even for a few years after it shut down, I would occasionally open its website to see whether it might someday return. During that time, I started using online services such as Feed43 and FeedBurner to write RSS rules for scientific journals. I also wrote and deployed some Python crawlers locally to fetch the full text of certain journals and generate PDF-format magazines issue by issue.
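The "RSS rules" step amounts to the inverse of parsing: given article titles and links scraped from a journal page, emit a valid RSS 2.0 document. A minimal sketch with the standard library follows; the function name and the example journal data are hypothetical, not taken from my actual crawlers.

```python
import xml.etree.ElementTree as ET

def build_rss(channel_title, channel_link, entries):
    """Assemble an RSS 2.0 document from scraped (title, link) pairs."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = channel_link
    for title, link in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link
    return ET.tostring(rss, encoding="unicode")

# Hypothetical scraped data standing in for a journal without a feed.
feed = build_rss("Journal of Examples", "https://journal.example.org",
                 [("A new result", "https://journal.example.org/articles/1")])
print(feed)
```

Serve the resulting XML from any static host and an ordinary feed reader can subscribe to a journal that never offered RSS in the first place.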
Cover story and new issue notification
Last modified on 2020-03-12