8. Predictions

    As social media continues to grow and evolve in the years to come, so will regulation of data, algorithms, and content. Governments across the world have recently begun calling for the regulation of social media platforms. It is not just governments asking platforms to adopt regulations, but society as well, especially when it comes to the content that children and minors can access. 

    Social media platforms such as TikTok, which have built algorithms to maximize user engagement, have been found to have a negative impact on users' mental health. While these algorithms do improve a user's experience on a platform, they can also lead to excessive social media usage and increased rates of anxiety, depression, and body image issues. Researchers have found that the algorithms can repeatedly expose users to harmful content, contributing to these negative outcomes. However, it is important to note that people interviewed in surveys have stated that TikTok has actually made positive contributions to their mental health. Stevie Chancellor, an assistant professor in the College of Science and Engineering at the University of Minnesota, stated, "Our research shows that TikTok helps people find community and mental health information. But people should also be mindful of its algorithm, how it works and when the system is providing them things that are harmful to their wellbeing." 



    So, what are governments around the world planning to do about short-form videos and the platforms that host them, such as TikTok and Instagram? They have started to push for transparency laws that would force platforms to explain how their algorithms work, and that would also let users select "non-algorithmic" feeds. 
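In practice, a "non-algorithmic" feed usually means ordering the same posts purely by time rather than by an engagement-ranking score. The sketch below illustrates the difference; the class, field names, and scoring are illustrative assumptions, not any platform's real system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    engagement_score: float  # what a ranking algorithm might optimize

def algorithmic_feed(posts):
    """Rank by predicted engagement (how most platforms order feeds)."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts):
    """'Non-algorithmic' ordering: newest first, no ranking model."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

posts = [
    Post("a", datetime(2024, 1, 1), engagement_score=0.9),
    Post("b", datetime(2024, 1, 3), engagement_score=0.1),
    Post("c", datetime(2024, 1, 2), engagement_score=0.5),
]

print([p.author for p in algorithmic_feed(posts)])    # ['a', 'c', 'b']
print([p.author for p in chronological_feed(posts)])  # ['b', 'c', 'a']
```

Note how the two orderings diverge: the newest post ("b") ranks last under engagement scoring but first chronologically, which is exactly the kind of difference a user-selectable feed toggle would expose.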

    However, it is AI regulation that is likely to expand the most, especially where AI-generated videos and images are concerned. Over the last several years, frustration has grown as AI-generated content has spread across social media, making it hard for society to tell what is factual truth and what was generated on a computer. In the future, more regulations and limitations on AI-generated content can be expected. These regulations could be enforced through labels on AI-generated content, as well as disclosure warnings when users are interacting with an AI rather than a human. Doing so would make society less susceptible to deception and could help improve the trustworthiness of content seen on social media. 
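One simple way a platform could enforce such a labeling rule is to attach the disclosure at render time based on a metadata flag. This is a minimal sketch; the field names and label text are assumptions for illustration, not any platform's real API.

```python
def render_caption(post: dict) -> str:
    """Prepend a disclosure label when a post is flagged as AI-generated.

    Assumes (hypothetically) that uploads carry an 'ai_generated' metadata
    flag, set either by the uploader or by automated detection.
    """
    caption = post.get("caption", "")
    if post.get("ai_generated", False):
        return f"[AI-generated] {caption}"
    return caption

print(render_caption({"caption": "Sunset timelapse", "ai_generated": True}))
# [AI-generated] Sunset timelapse
print(render_caption({"caption": "My dog", "ai_generated": False}))
# My dog
```

Keeping the disclosure in the rendering path, rather than baked into the caption text, would let regulators audit the rule and let users filter labeled content consistently.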



    In conclusion, it is unknown where the future of technology is headed, but laws such as the EU Artificial Intelligence Act and a growing number of court cases, such as The New York Times v. OpenAI, have taken the first steps toward regulating AI-generated content, algorithms, and even data privacy. 







