Where is social media headed?

I find these times to be exciting! Almost daily I am finding people who are using social media in new ways. It appears somewhat haphazard and groping, but aspects of social media seem to be developing and slowly taking shape. I believe it is in its embryonic stage compared to where it might be 20 years from now. As opposed to advancing technology, this seems to be more about personal development and expression. Most of the ills in society can be traced back to people not having their needs met, and having purpose and being valued can be hard to come by in today's society. Social media seems to have provided the masses some new opportunities in these areas. The big question to me is how this will play out and evolve. Just from being old and experiencing life, I have found that if someone realizes an identity they are happy with, and that identity is contingent on their status in a group, they will tend to conform to the group so as not to cause the death of that identity. Who will find status in this social media environment? Will we develop new ways of communicating that promote critical thinking? I think the advocates and champions of others have the potential to drive this movement in the right direction, and they will rise to the top levels of status simply because they are so badly needed. I see social media as a place where we could actually fix ourselves. This could have great ramifications for the world's societies. I think we are headed into the age of collaborating. What are your thoughts on this?

Huh? I thought social media was headed downhill: on the one hand you have corporate interests basically considering social media to be an advertising mechanism, and on the other hand you have all kinds of extremists and other nuts using social media to influence others toward their crazy positions.

Don’t you think we might just be experiencing growing pains?

As bad as it is already, it’s only going to get worse.

Facebook knows more about their active users than those users' close friends do. Facebook knows who your family is, who you hang out with, what places you go to, what your political opinions are, what you like to eat, what you wear. They know things that you don't know yourself–like if you're a little bit racist against some group, because you spend just a little more time looking at headlines with negative statements about that group, even if you don't actually visit the link.

They are not yet fully capable of processing the data they have available to them. But they have this data on billions of people. More than enough to run experiments on them and figure out exactly what works. And with the advances in deep learning, they can do this across everyone, all at once, and tease out patterns that no human psychologist could have ever discovered.

So far, they are only using the data for advertising and to help foreign powers manipulate your political opinions, but it will get a lot worse. There’s no safe level of exposure to Facebook.

I see that; all that info could be used for good or bad.

Social media is all about collecting your data in order to persuade you to buy shit you don't need or get you outraged over things that don't matter. All of the supposed benefits to "personal development" and open "expression" are incidental at best and often counterproductive. I see no evidence that any social media platform is doing anything to "promote critical thinking", and indeed it largely seems to amplify mindless groupthink and sometimes dangerous mob mentality, even before you consider the promulgation of conspiranoia and the well-documented impact upon the mental health and social development of children and teenagers.

Other than one business-oriented site I have not participated in social media (albeit initially as more of a personal information security issue rather than the more expansive problems) and see little value in it. When Facebook was just a platform for connecting with old friends and family it was mostly harmless (aside from some cyberstalking), but it is now a genuine force for social and political disruption without much upside, and Twitter is strictly an outrage machine, however much former CEO Jack Dorsey and its board tried to assure everyone of their progressive bonafides. I actually kind of hope Elon Musk succeeds in his hostile takeover attempt, because it will illustrate just what a malignancy it actually is on the public discourse, and Musk will likely drive it into insolvency by using it as a platform for the unfettered airing of every vague thought-like vapor going through his head.

All of that said, 'social media' as a technology is not going away, and I suspect the next iterations will serve to further draw people in by gamification and induced outrage. It has become entrenched in our social, educational, occupational, and political lives to the extent of being seen as a legitimate channel for public discourse and legal orders, and as long as there is an Internet we're going to have some version of Facebook, Twitter, Instagram, and probably Smell-O-Net next, so we need to figure out good ways to administer and regulate it that don't just involve occasional pointless Congressional hearings of CEOs and hoping that it will all just go away.

Stranger

Everything seems to be accelerated on social media, so if there's something bad happening it'll come to a head pretty quickly and hopefully implode. I really believe people will learn a new way of communicating; I'm just not sure how long it will take.

I think it’s worth considering the different types of social media, because the downsides aren’t the same. Facebook is a data collection behemoth. Twitter, less so, because it’s almost entirely a read-only system–the majority of users don’t tweet at all. Twitter is more like a malevolent casino, where the only goal is to capture your attention as long as possible. Their only useful metric is “engagement”, since that’s what advertisers pay for, and so they optimize their system for doomscrolling.

That’s bad, but Facebook is far worse, IMO, because they can easily manipulate people on an individual level. Probably most are resistant to Twitter’s doomscrolling, and only some fraction of the public will be entrapped. But even people who do not engage with Facebook at all are not outside its influence. Facebook knows who I am, my birthday, my family and even my childhood friends even though I’ve never had an account with them.

I think there’s at least the possibility of regulations against this sort of thing. Though mom isn’t going to be happy when all the pictures she shares of me have my face blurred out for legal reasons.

Pretty much sums up my view as well. Social media started out innocent enough and with promise, but has turned out to be rot and poison. The advent of 24-hr news and people using social media rather than traditional media as sources of information, plus the implementation of algorithms that keep pushing the same content in your direction while filtering out content you should see but won't, builds information bubbles that can be difficult to escape. The experiment has failed thus far, but the product is as strong as ever, and I don't see it going away any time soon.

I like that and I am going to keep it!

I don’t believe it is social media (a misnomer if there ever was one) that is going to implode: it is society. It actually has already imploded in some places.

We don’t have the time left, too late.

You speak Spanish, don’t you? It is a beautiful parallelism if not.

Well, that is the problem. People aren't (and neurologically probably really can't) "learn a new way of communicating"; they use social media the same way they talk in a bar, except in a bar anyone butting in to express their third-person view of your discussion would be considered a disrespectful eavesdropper, whereas on social media like Twitter a simple, slightly conflicted exchange can rapidly turn into a scrum of hatred and bigotry, and of course there are shitbirds lying in wait under their bridges for any opportunity to stir up shit and create an argument out of nothing. And many social media platforms are deliberately architected to appeal to the worst base emotional responses; while the semantic content is in the words, the context of the discussion that would be picked up in body language or tone is now lost in the more constricted medium, and making exaggerated statements or rushing to performative offense is actually rewarded in terms of clicks or likes or 're-tweets'. In essence, it compels people to respond like bratty children to optimize for the greatest traffic and views, and there is no expectation that people will learn their way around that, because then it wouldn't be as interesting or competitive as a platform.

Facebook certainly has more potential for deliberate malfeasance by the platform owner itself in terms of information collection and utilization (although you'd be surprised at the amount of information that can be collected even from passive users of Instagram or Twitter), but essentially all social media programs become popular and dominate by appealing to base instincts of people to behave in extreme ways that are often harmful to themselves or others. Not that this is new to social media—you can see this in the Opinion pages of a local newspaper from the 'Sixties, or the people who do stupid shit just to get on America's Funniest Home Videos—but with social media it is a full-time, no-vacation-from-madness pursuit for many people to both say and do extreme things or get outraged at the people who do.

I frankly think Twitter is the absolute king of shit-stirring because of how fast incorrect, offensive, or outrageous things can propagate, even though Facebook is (probably) the more insidious platform. And the fact that these platforms are now used so widely for 'official' business and pronouncements means that there is effectively no opting out even if you aren't a social media user yourself; you are still wedded to them as a source of common information (even if second-hand through being repeated by a more reliable information source) and the attendant impact they have on society at large. Of course, the intellectual labor they steal and the physical amount of energy used to sustain them are also not inconsequential impacts.

The even deeper concern is how much people outsource their critical thinking and native skepticism to Facebook, Twitter, et al., in that if they see something on there from a person they think they should trust, they automatically believe it even though there is no vetting or verification, and a platform like Twitter actively discourages taking the time to consider what you write before you post it. Even very smart, thoughtful, skeptical people can get caught up in the fury to tweet first and fastest, making statements and spreading misinformation that a moment's thought would have them reconsider.

Wait until we are doing actual 'work' on social media; those who have had to use social-media-like enterprise work platforms like Slack know just how much those 'productivity apps' can suck away work time while producing no useful work, and of course once we get to the point of 'expert systems' effectively doing a lot of intellectual labor, it will cause a further erosion of critical thinking skills and professional expertise. If you think people driving their cars into the ocean due to mindlessly following GPS instructions is ridiculous, wait until you have an expert system advising a physician on diagnosis or an engineer on how to build a good product without thinking through the actual consequences and complications.

Stranger

To the OP- What are you basing your optimism on?

Or they can make you a little bit racist by constantly showing you negative headlines about a group.

It’s easy to jump on the negativity bandwagon regarding social media. Data privacy, fake news, security, excessive commercialization, trolling, cyberbullying, catfishing, depression, narcissism, social isolation, addiction - all legitimate issues and concerns. But I think the truth is somewhere between Dr.Strangelove doom and gloom and HoneyBadgerDC’s rose-colored Oculus glasses.

The flip side is that with advances in AI, analytics, virtual reality, augmented reality, plus better controls for data privacy and cyber security people could have a social media experience that is more tailored, more authentic, less intrusive, and safer.

Then again, I often have trouble understanding the point of social media anyway. On the one hand, I see the benefits of being able to connect with people to gather info on and discuss every imaginable topic. On the other hand, there’s something that doesn’t sit right with me about a significant segment of the economy based on nothing more than people simply vying for each other’s attention.

One thing that I’m sure of is that it will become harder to not use social media at all. Try getting a professional job with no LinkedIn profile. Probably doesn’t matter as much for Gen-X like me or older, but at some point, someone might look like a freakin’ weirdo not asking someone out on a date through an appropriate app.

Yes, I think this is the baseline type of harm that all engagement-based social media commits. It’s bad, but in more of a passive way. People just stay generically dumb when using it.

And while you’re right that you can glean a surprising amount of information just passively, there is just no comparison to what Facebook has available. They could, if they wished, actively manipulate a person’s entire worldview. An abusive, narcissistic, gaslighting spouse doesn’t have a tenth the capability in this regard. I don’t think they’re quite at this level yet, but they’re working on it.

That is a concern, though I’d maybe put it at #2 behind active manipulation. And social media is a relatively small step in the long progression in this direction, from religion to newspapers to radio to TV to the internet.

There is too much information in the world for people to vet everything. I don’t think there’s any choice but to outsource some of it. I can apply critical thinking and my own knowledge to judge trustworthiness of some things people say, and use that as a clue for statements that I can’t immediately evaluate, but that is still not scalable for an individual, and does not work when a person is an expert in one thing but not another.

I would really like to see development in large-scale trust networks, where circles of trust can be built by comparing individual evaluations of various claims, and where individuals can have a “trust rating” relative to each other based on their degree of alignment. They would need to exclude artificial trust circles built by the malicious.

This is a difficult problem, not least because so much information is subjective, but truth does have one significant advantage in that it’s self-consistent. Networks of lies are self-contradictory and can be detected in principle.

It’s hard to say how such a thing would actually work at a large scale, and there’s zero business case for it so it’s even harder to predict who might build it. Still, I can dream.

Slack is in use where I work, but it’s not obligatory. I keep it closed due to the time sink. There is an email notification for new messages, and I almost filed a bug that the notifications take like half an hour to appear… but I think I like it better that way.

USENET is all we needed.

I'm actually more concerned about this in terms of the long-term social and occupational consequences. Of course, the lack of common sense/critical thinking/decorum of "kids these days" is the perennial complaint, but subordinating thinking processes to a machine 'intelligence' is different in kind: that 'intelligence' not only may not have the individual's best interests 'at heart' (so to speak) but may be algorithmically operating in ways that are contrary to the best interest of society at large. It is one thing for malicious actors to use Facebook as essentially a really fast pamphleteering system to spread radical and dangerous ideas far and wide, but far worse that people just give up even making the intellectual effort to think about consequences.

It's kind of trite to cite Idiocracy as some serious reflection about a future of humanity where thinking faculties are discounted and suppressed, but just as the agricultural and first industrial revolutions essentially eliminated a lot of general knowledge about basic survival and the physical conditioning to exist without 'society', the revolution in expert 'thinking systems' may literally atrophy our collective ability to think critically. It is already evident that many people just accept information without questioning it as long as it comes from some source that is consistent with their worldview (e.g. the 'information bubbles' that reinforce misinformation), but I am seriously concerned that people won't even develop a mature ability to think or consider alternatives, and will end up like spoilt children protected from anything that threatens their worldview.

I think it is even worse than there being a "zero business case" for circles of trust; it actually undermines the intent of a lot of users, and I'm not sure it could ever be protected to the point that it couldn't be used to reject critical or factual information in order to reinforce adverse behavior, at least not without some pretty active and necessarily subjective content moderation. As you say, it's a nice idea (and mirrors the methods used to vet intelligence information for truth and applicability), but I think it would be nearly impossible to make this work in the general context of an open content system like Facebook.

Regardless, unless there is a public demand for it, many people will opt out of it, which means that platforms will either discount it or figure out ways around it in pursuit of gaining the most attention and users. I don’t think there is any way of fixing that without fundamentally restructuring the human psyche, because we are in essence still apes that have just learned how to use tools and convey abstract information on the evolutionary scale of things, and now we have systems that can affect (or destroy) whole societies in ways that aren’t even readily predictable.

Stranger

My worry is for when they learn a little subtlety. People can tune out obvious radicalism. But what happens when you slowly lead them down a certain path?

Republicans in 2016 used social media to target left-leaning (Black) voters who were susceptible to being convinced to stay home. These weren’t obvious Republican ads. They just tried to put Clinton in a bad light. And so they convinced some number of people that it wasn’t worth voting in that election.

That’s just the most primitive, halting step in the direction of what I’m talking about. What if they could target not just populations, but individuals? Scour their entire social history, feed it through some machine learning mumbo-jumbo, and come up with a factor that is meaningful for just that one person. Look, that close family member was arrested on charges for a law that was enacted when so-and-so was governor. Look, that protest in your city that made you late to work is by the same group of people that so-and-so keeps talking about.

I’m selfish, so I want it for myself. I’d actually want most people to opt out. My mental model is that of a massively enhanced version of scientific peer review.

Peer review of course has serious problems, but it is much better than nothing, let alone the active promulgation of misinformation that appears on social media. It has its own problems with, for instance, fraudulent journal networks. But ultimately these things can be detected, and usually are eventually.

That’s not the only type of gaming that can happen, and if you increase the difficulty, then adversaries will improve their methods. However, overall I think the “good guys” have the advantage, or at the least can maintain their own useful trust network, even if they are surrounded by a sea of misinformation.

Maybe I’m wrong and it’s not possible, but so far I haven’t seen any serious research into the idea. It is, IMO, essentially a graph theory problem, with the principle that you can have an island of connected nodes that represents a self-consistent network, and while there may be other islands of misinformation, and a vast sea of nodes trying to muscle their way in to the trusted island, it should be difficult in a mathematically provable way.
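To make the graph-theory framing concrete, here is a minimal sketch of the idea: treat each user's evaluations of claims as data, link users whose evaluations agree above a threshold, and take the connected components as candidate "trust islands". All names, data, and the 0.8 threshold are illustrative assumptions, not a real system; detecting artificial trust circles would need far more than this.

```python
# Hypothetical sketch: cluster users into "trust islands" by agreement
# on shared claims. Data and threshold are made up for illustration.
from itertools import combinations

# Each user's verdicts on claims they evaluated (True = accepts claim).
evaluations = {
    "alice": {"c1": True, "c2": True, "c3": False},
    "bob":   {"c1": True, "c2": True},
    "carol": {"c1": False, "c3": True},
    "dave":  {"c1": False, "c2": False, "c3": True},
}

def agreement(a, b):
    """Fraction of jointly evaluated claims on which two users agree."""
    shared = evaluations[a].keys() & evaluations[b].keys()
    if not shared:
        return 0.0
    same = sum(evaluations[a][c] == evaluations[b][c] for c in shared)
    return same / len(shared)

def trust_islands(threshold=0.8):
    """Connected components of the graph whose edges join users
    with pairwise agreement at or above the threshold."""
    users = list(evaluations)
    adj = {u: set() for u in users}
    for a, b in combinations(users, 2):
        if agreement(a, b) >= threshold:
            adj[a].add(b)
            adj[b].add(a)
    seen, islands = set(), []
    for u in users:
        if u in seen:
            continue
        stack, comp = [u], set()
        while stack:  # depth-first walk of one component
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        islands.append(comp)
    return islands

print(trust_islands())  # two islands: {alice, bob} and {carol, dave}
```

The self-consistency point maps onto this directly: a coherent island stays connected under scrutiny, while contradictory evaluations fail the agreement threshold and fall outside it.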

I would eliminate almost all of the corporate controls and not worry about false information; just ignore it. Credibility is hard to maintain for any length of time, even when you're a very credible person trying your best. The idiots would very quickly fade into the background. Possibly have classes available on moderating, and do most of your controlling at the moderation level for certain groups. I belong to a lot of special interest groups that are not associated with Facebook, and over the past 20 years I have seen changes in people, positive changes, that have been astounding, mostly due to their association with these groups, where they got positive feedback. The potential to address a lot of the ills of society that have their roots in people's needs not being met is unmistakable.

There's a remarkable aspect of the current social media environment. One of the few true positives of social media is that (in principle) you can see exactly what a person said, and have very high confidence that it was actually that person. Furthermore, you can see the exact context in which it was said. This is in contrast to traditional media, in which you often don't see direct quotes or that greater context, and there's no mechanism for verifying that the person said such a thing.

And yet–we barely see that being used. There are numerous articles about “so-and-so said a thing”, and often enough people don’t even look at the article to see what was actually said and if the headline was remotely accurate. Plus you’re lucky if the article itself cites the social media posting.

The social media sites themselves undermine the utility here. Popular people find that there is an army of impersonators that pop up, using stupid character-set tricks to hide themselves. Twitter has made it possible for tweets to be deleted permanently, so even if an article does correctly cite its source, that source may be gone some time later. You can take a screenshot, but those can be faked. And now Twitter is adding an edit button, which also enables people to rewrite history.

I’d actually support a blockchain-based posting system. It should not be possible to alter history. If someone wants to correct an error, let them, but provide access to the history (Wikipedia does the right thing here). A system that allows changing the past is not trustworthy.
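The "not possible to alter history" property doesn't strictly require a full blockchain; even a simple hash chain, where each post commits to the hash of the post before it, makes any rewrite of past entries detectable. Here's a minimal sketch of that idea, with corrections appended as new entries rather than edits. All names here are hypothetical, and this illustrates the principle only, not a production design.

```python
# Sketch of a tamper-evident, append-only post log using a hash chain.
# Corrections are appended as new entries that reference the old one,
# so history is preserved (the Wikipedia-style approach).
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

class PostLog:
    def __init__(self):
        self.entries = []

    def append(self, author, text, corrects=None):
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        body = {"author": author, "text": text,
                "corrects": corrects, "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return len(self.entries) - 1  # index, usable as a correction target

    def verify(self):
        """Recompute every hash; editing any past entry breaks the chain."""
        prev = GENESIS
        for e in self.entries:
            body = {k: e[k] for k in ("author", "text", "corrects", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != digest:
                return False
            prev = e["hash"]
        return True

log = PostLog()
first = log.append("someuser", "Original claim")
log.append("someuser", "Corrected claim", corrects=first)
assert log.verify()                       # untouched history checks out
log.entries[0]["text"] = "I never said that"
assert not log.verify()                   # rewriting history is detectable
```

The correction mechanism is the key design choice: errors get fixed by adding a new entry pointing at the old one, so the record of what was actually said at the time survives.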

To circle back to the statement I replied to, credibility can only be maintained if there is a persistent identity. If you can’t clearly demonstrate that X said Y, then there is no possibility to have credibility. Social media does have the ability to do this, but in many cases they don’t.

How do you propose to do that? If corporations cannot control their own platform, what incentive do they have to provide one?