ON SOCIAL TOOLS
Slack's Cal Henderson describes a practice within the company's use of its own product: the 'polite raccoon' emoji, which nudges overly talkative (writative?) team members to take their off-topic chattering in a Slack channel offline.
One of the downsides of working out loud is, well, it can get loud.
According to a massive 2012 McKinsey study of 4,200 companies, 72% reported using social tools to facilitate communication. Paul Leonardi and Tsedal Neeley were struck by those figures and wanted to look into the motivations of those signing up for these platforms. Mostly they found a lot of me-tooism, and few decisions based on solid business cases.
They decided to run an experiment at a large financial services firm contrasting two groups, one using Jive-n, and the other relying only on conventional tools, like email.
Their results strike me as far too rosy, and they sidestep a number of well-known problems with brute-force adoption of both earlier generations of social tools and today's crop. But take a look, by all means, at What Managers Need to Know About Social Tools.
A small interaction with Jason Fried of Basecamp today on Twitter:
The link is to a Work Futures post on Medium (I haven't ported it here, yet): Progressivity, not Productivity.
I noticed that Nikhil Nulkar (@nikhilnulkar) also linked to that article in response to Jason's tweet. Thanks, Nikhil!
Anil Dash connects the dots between seemingly innocuous choices in the CMSes of blogging's early days and the resulting algorithmic arms race that has rejiggered the world's media and fused it with the web.
The pea under the mattress is the choice that Google made to favor dashes over underscores in blog post URLs, which led the CMS companies to adopt that convention wholesale, for better Google rankings. Anil, despite his surname, favored underscores. But in retrospect, he now sees the lineaments of today's online world:
Google was teaching us that the way to win on the web is to game the algorithms of big companies.
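The mechanical detail at the center of this is almost comically small: a slugging convention. A minimal sketch of the dash-separated slug style that CMSes converged on (my own illustration, not code from any particular CMS):

```python
import re

def slugify(title: str) -> str:
    """Build a URL slug using dashes as word separators --
    the convention Google's ranking favored over underscores."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics into dashes
    return slug.strip("-")

print(slugify("What Managers Need to Know About Social Tools"))
# what-managers-need-to-know-about-social-tools
```

A one-character choice, multiplied across every post on every blog, became an early lesson in shaping content to suit an algorithm.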
And then, the molehill begat a mountain [emphasis mine]:
In that old era of the social web, the community’s shared knowledge of how to game algorithms was mostly used for harmless things. People would try to get more readers for their personal blogs, or pull off silly stunts like “Google bombing”, which was essentially just playing with getting a certain site to rank high in Google’s results for a particular term. It’s no wonder we thought it was no big deal if we changed our apps to make content that suited Google’s arbitrary rules. None of this stuff mattered that much, right?
But by attaching monetary value to search ranking, what Google ended up catalyzing was a never-ending arms race, where they constantly updated their algorithm and each community on the web constantly tried to learn how to exploit the new mechanics. The stakes of the algorithmic arms race kept going up; instead of being about pulling off silly pranks, understanding how to appease Google became the cornerstone of multi-million-dollar marketing campaigns. Instead of being about one character in a web address, it became about publishing content that suited the algorithm, whether it was true or not. At first, the only people paying attention were nerds making content management systems, then a broader audience of people trying to optimize their search engine positioning.
Eventually, though, movements across the political spectrum came to understand that knowledge of how to appease the algorithms that govern social media had profound social and cultural power. It wasn’t just marketers who figured out the best way to promote their ideas, it was trolls and activists and harassers and people on the fringes who wouldn’t have had any way to get the word out before—both for better and for worse. At that point, the rise of fake media markets was inevitable.
Anil tries to end on a rallying cry, exhorting us to 'hold the big platforms accountable' and to turn the tide.
My worry is that we may have to unravel the entire fabric of the web to rework this massive concentration of power, which grew from the coevolution of social platforms and the networks of people who came to populate and appropriate them. If we can even get there from here.
André Spicer relates a few tales of dealing with corporate bullshit, like new-age offsites where nebulous abstractions and team-building exercises waste an afternoon in some hotel conference room. Yawn.
Spicer offers Harry Frankfurt's definition of bullshit:
The philosopher Harry Frankfurt at Princeton University defined bullshit as talk that has no relationship to the truth. Lying covers up the truth, while bullshit is empty, and bears no relationship to the truth.
In my experience, the worst purveyors of bullshit are senior executives, especially successful CEOs (or CEOs of successful companies, which is not quite the same thing). Spicer seems to agree:
Calling out an underling’s piffle might be tough, but calling bullshit on the boss is usually impossible. Yet we also know that organisations that encourage people to speak up tend to retain their staff, learn more, and perform better. So how can you question your superiors’ bullshit without incurring their wrath? One study by Ethan Burris of the University of Texas at Austin provides some solutions. He found that it made a big difference how an employee went about posing the questions. ‘Challenging’ questions were met with punishment, while supportive questions received a fair hearing. So instead of bounding up to your boss and saying: ‘I can’t believe your bullshit,’ it would be a better idea to point out: ‘We might want to check what the evidence says, then tweak it a little to make it better.’
Good survival skills for work rebels.
Maybe I should have called this section NOT ON AI, since it's really about companies that use people to augment AI-based systems, and the problems that can arise from that.
Lily Hay Newman offers a deep dive into the iffy security side effects of Expensify relying on Mechanical Turkers to review the results of AI analysis of expense receipts. Jeffrey Bigham of Carnegie Mellon says,
Every product that uses AI also uses people. I wouldn't even say it's a backstop so much as a core part of the process. People definitely believe their technology is powered only by AI when it seems intelligent, and there’s every incentive for the companies to perpetuate that myth.
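The pattern Bigham describes, with people as a core part of the process rather than a backstop, often takes the shape of a confidence-threshold router: the model handles what it's sure about, and everything else goes to a human queue. A minimal sketch (hypothetical names and threshold, not Expensify's actual pipeline):

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; real systems tune this

@dataclass
class Extraction:
    text: str          # what the model read off the receipt
    confidence: float  # model's self-reported confidence, 0..1

def route(receipt: Extraction) -> str:
    """Auto-approve confident AI output; send the rest to human review."""
    if receipt.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approve"
    return "human-review"  # e.g., a crowdworker queue

print(route(Extraction("Dinner $42.17", 0.97)))   # auto-approve
print(route(Extraction("D1nner $4Z.l7", 0.55)))   # human-review
```

The security problem Newman reports lives in that second branch: the hardest-to-read receipts, often the most sensitive ones, are exactly what gets shipped to strangers.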
Andrew Russell and Lee Vinsel take on the unimaginable: deflating the hype around innovation, and instead drawing attention to the prosaic but essential need for maintenance. Along the way they dethrone Schumpeter and Christensen, and all the droning on about innovation in corporations.
At the turn of the millennium, in the world of business and technology, innovation had transformed into an erotic fetish. Armies of young tech wizards aspired to become disrupters. The ambition to disrupt in pursuit of innovation transcended politics, enlisting liberals and conservatives alike. Conservative politicians could gut government and cut taxes in the name of spurring entrepreneurship, while liberals could create new programmes aimed at fostering research. The idea was vague enough to do nearly anything in its name without feeling the slightest conflict, just as long as you repeated the mantra: INNOVATION!! ENTREPRENEURSHIP!!
They note that innovation's shine began to fade in the early 2000s, and they draw the inescapable link between the madness of endless, unyielding innovation and its resulting impacts on the Earth, on society, and on our skewed economics.
In their final analysis, they offer this [emphasis mine]:
There is an urgent need to reckon more squarely and honestly with our machines and ourselves. Ultimately, emphasising maintenance involves moving from buzzwords to values, and from means to ends. In formal economic terms, ‘innovation’ involves the diffusion of new things and practices. The term is completely agnostic about whether these things and practices are good. Crack cocaine, for example, was a highly innovative product in the 1980s, which involved a great deal of entrepreneurship (called ‘dealing’) and generated lots of revenue. Innovation! Entrepreneurship! Perhaps this point is cynical, but it draws our attention to a perverse reality: contemporary discourse treats innovation as a positive value in itself, when it is not.
Entire societies have come to talk about innovation as if it were an inherently desirable value, like love, fraternity, courage, beauty, dignity, or responsibility. Innovation-speak worships at the altar of change, but it rarely asks who benefits, to what end? A focus on maintenance provides opportunities to ask questions about what we really want out of technologies. What do we really care about? What kind of society do we want to live in? Will this help get us there? We must shift from means, including the technologies that underpin our everyday actions, to ends, including the many kinds of social beneficence and improvement that technology can offer. Our increasingly unequal and fearful world would be grateful.