Watching, as an activity, is normally divided into three categories:

  1. A single viewer watches a single screen showing one piece of content;
  2. Multiple viewers watch a single screen showing one piece of content;
  3. Multiple viewers watch multiple screens showing one piece of content.

But what about a single viewer, or multiple viewers, watching multiple screens showing multiple pieces of content, yet experiencing one story? That is the dynamic watching discussed here.

For instance, an immersive theatre piece or an exhibition about a remarkable person can create a situation in which the audience experiences multiple fragments, or plots, and finally owns the panorama by integrating them. It is not one-dimensional storytelling driven by time, but a four-dimensional situation offered to the audience.

This idea came from Dr Eleanor Dare’s feedback on the proposal for my final project at the Royal College of Art:

Also, consider ways in which the imagery may be less predictable: can you show us a world no one has seen before? Can you create an aesthetic which does not replicate visual clichés about the future we have all seen many times? I’d urge you to challenge yourself on that front, and get away from familiar modes of representation around the future, AI and the posthuman…

I also got suggestions from Ben Stopher, and the Futures Cone really interests me. As Joseph Voros explains in his account of the model: Futurists have often spoken, and continue to speak, of three main classes of futures: possible, probable, and preferable. These have at times lent themselves to defining various forms of more specialised futures activity, with some futurists focusing on, as it were, exploring the possible, some on analysing the probable, and some on shaping the preferable, with many related variations on this nomenclature and phraseology (e.g., again, Amara 1991, and many others). It is possible to expand upon this three-part taxonomy to include at least 7 (or even 8) major types of alternative futures, and it is convenient to depict this expanded taxonomy as a ‘cone’ diagram. The ‘futures cone’ model was used to portray alternative futures by Hancock and Bezold (1994), and was itself based on a taxonomy of futures by Henchey (1978), wherein four main classes of future were discussed (possible, plausible, probable, preferable).

  • Potential – everything beyond the present moment is a potential future. This comes from the assumption that the future is undetermined and ‘open’, not inevitable or ‘fixed’, which is perhaps the foundational axiom of Futures Studies.
  • Preposterous – these are the futures we judge to be ‘ridiculous’, ‘impossible’, or that will ‘never’ happen. I introduced this category because the next category (which used to be the edge of the original form of the cone) did not seem big enough, or able to capture the sometimes-vehement refusal to even entertain them that some people would exhibit to some ideas about the future. This category arises from homage to James Dator and his Second Law of the Future—“any useful idea about the future should appear ridiculous” (Dator 2005)—as well as to Arthur C. Clarke and his Second Law—“the only way of finding the limits of the possible is by going beyond them into the impossible” (Clarke 2000, p. 2). Accordingly, the boundary between the Preposterous and the Possible could be reasonably called the ‘Clarke-Dator Boundary’ or perhaps the ‘Clarke-Dator Discontinuity’, since crossing it in the outward direction represents a very important but, for some people, very difficult, movement in prospection thinking. (This is what is represented by the red arrows in the diagram.)
  • Possible – these are those futures that we think ‘might’ happen, based on some future knowledge we do not yet possess, but which we might possess someday (e.g., warp drive).
  • Plausible – those we think ‘could’ happen based on our current understanding of how the world works (physical laws, social processes, etc).
  • Probable – those we think are ‘likely to’ happen, usually based on (in many cases, quantitative) current trends.
  • Preferable – those we think ‘should’ or ‘ought to’ happen: normative value judgements as opposed to the mostly cognitive, above. There is also of course the associated converse class—the un-preferred futures—a ‘shadow’ form of anti-normative futures that we think should not happen nor ever be allowed to happen (e.g., global climate change scenarios come to mind).
  • Projected – the (singular) default, business as usual, ‘baseline’, extrapolated ‘continuation of the past through the present’ future. This single future could also be considered as being ‘the most probable’ of the Probable futures. And,
  • (Predicted) – the future that someone claims ‘will’ happen. I briefly toyed with using this category for a few years quite some time ago now, but I ended up not using it anymore because it tends to cloud the openness to possibilities (or, more usefully, the ‘preposter-abilities’!) that using the full Futures Cone is intended to engender.

This taxonomy finds its greatest utility when undertaking the Prospection phase of the Generic Foresight Process (Voros 2003) especially when the taxonomy is presented in reverse order from Projected to Preposterous. Here, one frames the extent to which the thinking is ‘opened out’ (implied by a reverse-order presentation of the taxonomy) by choosing a question form that is appropriate to the degree of openness required for the futures exploration. Thus, “what preposterously ‘impossible’ things might happen?” sets a different tone for prospection than the somewhat tamer question “what is projected to occur in the next 12 months?”
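
To make this reverse-order framing concrete, here is a minimal sketch in Python (my own illustration, not anything from Voros; the question wordings beyond the two he gives are assumptions) treating the taxonomy as an ordered structure that yields one prospection question per degree of ‘opening out’:

    # A sketch of the Futures Cone classes ordered from narrowest to widest.
    # Preferable and Potential are omitted: Preferable is a normative
    # judgement that cuts across the others, and Potential simply contains
    # everything beyond the present moment.
    FUTURES_CONE = [
        ("Projected", "What is projected to occur in the next 12 months?"),
        ("Probable", "What is likely to happen?"),
        ("Plausible", "What could happen?"),
        ("Possible", "What might happen?"),
        ("Preposterous", "What preposterously 'impossible' things might happen?"),
    ]

    def prospection_questions(openness):
        """Yield one framing question per class, starting from Projected
        and opening out towards Preposterous; openness (1-5) sets how far
        out the exploration goes."""
        for name, question in FUTURES_CONE[:openness]:
            yield "{}: {}".format(name, question)

    # A fully opened-out prospection walks all five classes.
    for line in prospection_questions(openness=5):
        print(line)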

Sci-fi films are getting boring in this period, when science and technology advance unexpectedly and the distance between milestones gets smaller and smaller. Most sci-fi films talk about artificial intelligence, extraterrestrial intelligence and the end of the world, all of which are already familiar to everyone. We live in a minute-calculated world. This is why The Long Now Foundation was started, to provide a counterpoint to today’s accelerating culture and help make long-term thinking more common. The Foundation is running a significant project named The 10,000 Year Clock.

In addition, there is another interesting example, Onkalo, a gigantic bunker built in Finland, 500 metres below the earth, that has to last 100,000 years – supposedly impervious to any event on the surface and far away from any possible earthquake danger. Its purpose is to house thousands of tonnes of radioactive nuclear waste.

What does time, especially such a super-long-term span of it, mean to us – not only to a single human being, but to humanity as a whole?

Wildcards are by definition low-probability events (sometimes referred to as ‘mini-scenarios’) that would have very large impact if they occurred (Petersen 1997, 1999). Since they are considered ‘low probability’ (i.e., outside the Probable zone), any member of any class of future outside the range of probable futures could by definition be considered a wildcard (although this usage is not common, as the focus tends to be on ‘high impact’ events).

So, in my project, the ideas are the realization of artificial intelligence, accidents caused by artificial intelligence, and the transformation from human to cyborg – from the organic to the inorganic, from the cell to the electronic – which belong to the Predicted future, or at most the Preferable future. The main idea concerns the rights of the trans-human (which I define as the Chimera), and above all the discrimination that is going to fall on the non-human (as commonly defined today). This might be the Plausible future.

Under this framework, the project needs to go further and step into the Preposterous area.

So what is the ridiculous, impossible, ‘never going to happen’ future?


Reference:

In 2015, Google Photos labeled black people ‘gorillas’.

There is a short video featuring “Black Desi” and his colleague “White Wanda”. When Wanda, a white woman, is in front of the screen, the camera zooms to her face and moves as she moves. But when Desi, a black man, does the same, the camera does not respond by tracking him. The clip is light-hearted in tone but is titled “HP computers are racist”.

As the news article “Facial recognition software is biased towards white men, researcher finds” reported, gender was misidentified in less than one percent of lighter-skinned males; in up to seven percent of lighter-skinned females; in up to 12 percent of darker-skinned males; and in up to 35 percent of darker-skinned females. It is hardly the first time that facial recognition technology has been proven inaccurate, but more and more evidence points towards the need for diverse data sets, as well as diversity among the people who create and deploy these technologies, in order for the algorithms to accurately recognize individuals regardless of race or other identifiers.
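
Disparities like these only surface when a classifier is audited per demographic group rather than with a single overall accuracy figure. Below is a minimal sketch of such a disaggregated audit in Python; the records are hypothetical placeholders, not data from the study:

    from collections import defaultdict

    # Each record is (demographic group, true label, predicted label).
    # Hypothetical placeholder data, not figures from the research.
    predictions = [
        ("lighter-skinned male", "male", "male"),
        ("lighter-skinned female", "female", "female"),
        ("darker-skinned male", "male", "male"),
        ("darker-skinned male", "male", "female"),      # misclassified
        ("darker-skinned female", "female", "male"),    # misclassified
        ("darker-skinned female", "female", "female"),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in predictions:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1

    # A single overall accuracy figure would hide the gap between groups,
    # so report each group's misclassification rate separately.
    for group, total in totals.items():
        print("{}: {:.0%} misclassified".format(group, errors[group] / total))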

In Automated Graphic Design, it is said at the end:

Automation was looming in the early 2010s. But designers were too busy funding nostalgia on Kickstarter via good old Modernism. Trolling OS icons on Dribbble was more entertaining than debating and dealing with a political issue that would shape the way we now work, think and live. For most designers, it is all far too late.

Nearly three-quarters (73 percent) of US adults believe artificial intelligence will “eliminate more jobs than it creates,” according to a Gallup survey. But the same survey found that less than a quarter (23 percent) of people were “worried” or “very worried” automation would affect them personally. Notably, these figures vary depending on education. For respondents with only a four-year college degree or less, 28 percent were worried about AI taking their job; for people with at least a bachelor’s degree, that figure was 15 percent.

One survey conducted by Quartz last year found that 90 percent of respondents thought that up to half of all jobs would be lost to automation in five years, but 91 percent said there was “no risk to my job.” Another study from the Pew Research Center in 2016 found the same: 65 percent of respondents said that 50 years from now automation would take over “much” of the work currently being done by humans, but 80 percent thought their own job would still exist in that time frame.

On the surface, these answers suggest complacency, ignorance, or short-sightedness, but they also reflect a deep divide among experts about what effects new technology will actually have on the workplace.

Historically, though, it’s the cheerier scenario that’s been true: technology usually leads to a net gain in jobs, destroying some professions but creating new ones in the process. What’s different this time around, argue some economists and AI experts, is that machines are qualitatively smarter than they were in the past, and historical examples don’t offer a useful comparison. This stance is sometimes presented as a doomsday scenario in which AI and automation lead to mass unemployment.

These are choice extracts from the article ‘Most Americans think artificial intelligence will destroy other people’s jobs, not theirs’.

What is the precariat?

In sociology and economics, the precariat is a social class formed by people suffering from precarity, which is a condition of existence without predictability or security, affecting material or psychological welfare. The term is a portmanteau obtained by merging precarious with proletariat. Unlike the proletariat class of industrial workers in the 20th century who lacked their own means of production and hence sold their labour to live, members of the precariat are only partially involved in labour and must undertake extensive “unremunerated activities that are essential if they are to retain access to jobs and to decent earnings”. Specifically, it is the condition of lack of job security, including intermittent employment or underemployment and the resultant precarious existence. The emergence of this class has been ascribed to the entrenchment of neoliberal capitalism.

The analysis of the results of the Great British Class Survey of 2013, a collaboration between the BBC and researchers from several UK universities, contended there is a new model of class structure consisting of seven classes: a wealthy “elite”; a prosperous salaried “middle class” consisting of professionals and managers; a class of technical experts; a class of ‘new affluent’ workers, and at the lower levels of the class structure, in addition to an ageing traditional working class, a ‘precariat’ characterised by very low levels of capital and lasting precarious economic security, and a group of emergent service workers.

The following is from the first of a three-part series exploring the effects of global capitalism on modern workers by Guy Standing, author of The Precariat:

1. The first faction consists of those who have fallen from old working-class communities or families. They feel they do not have what their parents or peers had. They may be called atavists, since they look backwards, feeling deprived of a real or imagined past. Not having much education, they listen to populist sirens who play on their fears and blame “the other” – migrants, refugees, foreigners, or some other group easily demonized. The atavists supported Brexit and have flocked to the far right everywhere. They will continue to go that way until a new progressive politics reaches out to them.

2. The second group are nostalgics. These consist of migrants and beleaguered minorities, who feel deprived of a present time, a home, a belonging. Recognizing their supplicant status, mostly they keep their heads down politically. But occasionally the pressures become too great and they explode in days of rage. It would be churlish to blame them.

3. The third faction is what I call progressives, since they feel deprived of a lost future. It consists of people who go to college, promised by their parents, teachers and politicians that this will grant them a career. They soon realize they were sold a lottery ticket and come out without a future and with plenty of debt. This faction is dangerous in a more positive way. They are unlikely to support populists. But they also reject old conservative or social democratic political parties. Intuitively, they are looking for a new politics of paradise, which they do not see in the old political spectrum or in such bodies as trade unions.

It is an undeniable fact that university students face fierce competition in a harsher social environment, even though they have been told that a promising future is waiting for them and have seen how their parents succeeded along the same path. Has the promising future become a commercial element, packaged in a variety of forms and placed in front of students? Are parents gravely misunderstanding the situation?

Have universities become a Precariat Production Line, manufacturing the precariat, just as education has been compared to a factory that delivers identical human beings to society?

Nowadays, college students constantly complain about employment pressure and the stresses of life, such as housing costs, and about the gap between reality and the ideal that was promised by their parents, teachers and politicians, but they never ask whether this should be such a common reality.

It is also a fact that the average salary keeps increasing; everything seems to be going well.

But it must be questioned: Is this situation common?

The Elephant Development, which is happening right now, illustrates several facts. Of a planned 979 private homes, only 33 will be at social rent, affordable to the majority of people who live in the neighbourhood – a staggering 3.3% of the total homes Delancey wants to build. On the treatment of the numerous local traders at the Shopping Centre, there are still only poor intentions about making sure there are robust and genuine offers of relocation in the area; Delancey seeks to throw money at the problem by offering a pissy £250,000 ‘towards a relocation fund’, but it is not clear how many of the 70 or so businesses there will get this help. So-called ‘regeneration’ based on property development might add a little council tax to the Council’s coffers, but socially it actually increases poverty, isolation, ill health, anxiety and so on.

I have started to think that this may not only be a negative result of the education economy itself, but also of the social environment produced by government, even if this is a commonplace point that I used to avoid talking about. Back in the 1970s, the Nixon administration twice got a bill pushing for UBI through the House of Representatives before it was blocked by the Senate. The case certainly seems strong when looking at data from societies which have adopted UBI in the past. In 1974, the Canadian town of Dauphin gave everyone a guaranteed basic income for four years, so that nobody fell below the poverty line. The data wasn’t fully analysed until 2009, but the findings showed that children’s school performance improved, hospitalisation went down and domestic violence was much reduced. It has also been found that the countries with the shortest working weeks have the highest social capital – people not only volunteer more, they take more time for things such as going to the theatre.

In an article written by Gemma Milne and posted on ogilvy.com, it is said that “The definition of work is something we haven’t quite formalised as a society – if it’s about doing something useful, then surely volunteering or caring for children and the elderly should count. In the context of mass automation, if robots are to take away our employment then are we to move towards a society where the focus is more on ‘valuable’ work, leaving us to lead better lives?”

So it seems the solution should be UBI (Unconditional Basic Income), so that the basic living of graduates is protected. But, to my mind, is it going to reduce the value of being educated?

There is a description of this in the article Precarity Pilot: Making Space for Socially- and Politically-engaged Design Practice, which also offers a response through the series of practices run by Precarity Pilot:

There seems to be an open assumption within design education that designers should engage with pressing social and environmental issues. What became clear was that although designers and design education do not openly speak about it, within the creative industries most people are exposed to exhausting precarious working and living conditions, such as bulimic work patterns, long hours, poor pay, anxiety, psychological and physical stress, and lack of social protection (cf. Elzenbaumer & Giuliani, 2014; Lorey, 2006).

“We shape our tools and, thereafter, our tools shape us.” — John Culkin (1967)
“form follows function”—Louis Sullivan

Sullivan’s credo still holds, but perhaps a new expression more in line with the Digital Age is: function follows human needs, form follows human behavior.

As with the philosophical thought experiment “If a tree falls in a forest and no one is around to hear it, does it make a sound?”, how does a tool exist when it is unused? In the physical world the body of the tool still exists, but what about its function?

“The wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.” (Herbert Simon, 1971)


Tristan Harris, the co-founder of the nonprofit group Time Well Spent, calls this a world dominated by the race for attention. According to Harris, an ex-design ethicist at Google who The Atlantic described as “the closest thing Silicon Valley has to a conscience,” we all “live in a city called the attention economy.” That is what is shaping everything about contemporary life, Harris says, particularly our increasingly surreal politics.

So what to do? How to make this new community more habitable?

  1. All of us should recognize that we are vulnerable, and all of us should “curate our own lives”.
  2. The platform companies should recognize that their users have “vulnerable minds”, and they should make a conscious effort to avoid feeding our “lizard brains”.

Want to succeed in the Attention Economy—without losing your soul? Here’s how:

1. RIDE THE NEWS, AND THEN RISE ABOVE THE NOISE

A top executive at a large industrial company recently marveled to me that one of Fast Company‘s competitors was tweeting more than 150 times a day; I didn’t have the heart to admit that we weren’t much more restrained. To get people to engage with content, you need to be in front of them. Even more, you need to be constantly assessing what others are doing, and adjusting your tactics and your output in real time. While data can be useful, the critical factor if you want to separate yourself is organizational metabolism: rapid, streamlined decision making. The most distinctive, impactful content requires taking some risks without losing sight of your north star. As Anthony Bourdain, host of the hit travel and food series Parts Unknown, says in this issue, “What’s good for you in the short run is not necessarily good for you in the long run.”

2. CREATE EMOTION

Controversy is one way to gain attention: Brashly challenge convention. (You might call this the Trump Doctrine.) Violence and sex works, too. But such engagement is often shallow and short-lived. To understand what anchors deeper, longer-lasting connection, recall how Hamilton became a global phenomenon. Creator Lin-Manuel Miranda used a 200-year-old story to tap into core human truths. The best creative linking of ideas and feelings looks effortless, but it is an art. The best advertising has always done this, whatever the medium. Giphy’s success is built on enabling our creative expression, providing just the right, nuanced image to capture a mood.

3. USE WHAT’S NEW

Once upon a time, we made fun of hapless smartphone users who forgot to turn their cameras to landscape perspective before making videos. And then Snapchat turned those “mistakes” into a new, booming format. From GIFs to chatbots, the tools available continue to proliferate. Even what’s old has become new, like audio, thanks to podcasts and digital assistants like Alexa and Siri.

4. BELIEVE IN SOMETHING

If you’re going to streamline decision making, take creative risks, and connect emotionally using new tools, it helps to have guiding principles. That’s not just about style guides and preferred, brand-appropriate words and images. It’s about clarity of purpose, the mission of your enterprise, what it is you really want to get done. A great example of this can be found in our first-person story about Trader Joe’s work culture from the latest issue. The word “authentic” gets thrown around, as if it’s something to be managed (or, in the worst cases, manufactured). But there’s no substitute for actually believing in something. Whether we kneel, link arms, or stand on the sidelines, our actions reveal who we really are.

Red Dots

New and flourishing modes of socialization amount, in the most abstract terms, to the creation and reduction of dots, and the experience of their attendant joys and anxieties. Dots are deceptively, insidiously simple: They are either there or they’re not; they contain a number, and that number has a value. But they imbue whatever they touch with a spirit of urgency, reminding us that behind each otherwise static icon is unfinished business. They don’t so much inform us or guide us as correct us: You’re looking there, but you should be looking here. They’re a lawn that must be mowed. Boils that must be lanced, or at least scabs that itch to be picked. They’re Bubble Wrap laid over your entire digital existence.

Late last year, a red badge burbled to the surface next to millions of iPhone users’ Settings apps. It looked as though it might be an update, but it turned out to be a demand: Finish adding your credit card to Apple Pay, or the dot stays put. Apple might as well have said: Give us your credit card number, or we will annoy you until you do.

As Elon Musk has put it: “The thing that people, I think, don’t appreciate right now is that they are already a cyborg. You’re already a different creature than you would have been 20 years ago, or even 10 years ago. You can see this when they do surveys of like, ‘how long do you want to be away from your phone?’ and – particularly if you’re a teenager or in your 20s – even a day hurts. If you leave your phone behind, it’s like missing limb syndrome. I think people – they’re already kind of merged with their phone and their laptop and their applications and everything.”

So Neuralink would end up taking all the external devices that make us a cyborg and putting them straight into our brain.

Dan Brown said “Transhumanism is the ethics and science of using things like biological and genetic engineering to transform our bodies and make us a more powerful species”.

The History of ‘Transhumanism’ 

Human enhancement technologies (HET)

Emerging technologies

Speculative technologies

  • Mind uploading, the hypothetical process of “transferring”/”uploading” or copying a conscious mind from a brain to a non-biological substrate by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device.
  • Exocortex, a theoretical artificial external information processing system that would augment a brain’s biological high-level cognitive processes.
  • Endogenous artificial nutrition, such as having a radioisotope generator that resynthesizes glucose (similarly to photosynthesis), amino acids and vitamins from their degradation products, theoretically availing for weeks without food if necessary.

Boellstorff and a growing number of researchers believe that experiences in virtual space can reflect the state of a player’s offline life. Boellstorff further adds that part of the life human beings experience has always been virtual. He writes: “Since it is human nature to experience life through the prism of culture, human beings are themselves virtual.”

What is human virtuality? [Proofreader’s note: virtuality, which Marcel Proust defined as “real but not actual, ideal but not abstract”.] Boellstorff believes that people experience virtual life through consensual symbols, such as gods and religion. Robert Wright, in his book The Evolution of God, proposed a theory about how this phenomenon has developed: he argues that religion, long used as an instrument of power, is being replaced by legal codes.

Drugs, Rape, Massacres: How AI Is Exposing Children to the Worst of Humanity

Totsploitation is in a sense grounded in the same grand tendencies in the history of innovation in the arts. By recombining motifs associated with “pure” children’s entertainment and themes connected with adult situations, the artist provokes a reaction not unlike that invited by Manet’s Olympia. The unconventional brushwork, sloppy from a neoclassical perspective, is even paralleled by the sketchy and unfinished quality of these videos. But while Manet explored the fine line separating pornography and portraiture in nude images of women branded “Venus,” the modern Totsploitation artist elicits a shocked response by dwelling on the ambiguous boundaries that distinguish different types of children’s entertainment.

Totsploitation’s closest cousin is another genre featuring mashups of stock cartoon characters, jump cuts, startling soundtracks, and disturbing hypnotic juxtapositions of kid-friendly content with adult violence: the unfortunately named Youtube Poop. It involves “absurdist remixes that ape and mock the lowest technical and aesthetic standards of remix culture to comment on remix culture itself.” Hurricoaster’s The Sky Had a Weegee is a classic example of the genre, incorporating clips from Spongebob Squarepants and the horrendous DOS version of Mario is Missing. Trajce Cvetkovski drew attention to the video when he wrote about it in 2013 because Viacom claimed a copyright infringement against it. Now it has over 16 million views on Youtube. One of these days, Disney or Marvel might just immortalize a Totsploitation classic in the same way.

The Boundaries of Body Language, and Crossing Them

When you take a pair of scissors to a novel and cut away its words and sentences, a bit here and a bit there, it does not mean you can dismantle the meaning of the text; on the contrary, a new language is born. Reading that newly born language, you may find it a little strange, but you may also gradually come to understand it. The pleasure of being impressed without understanding lies precisely in the not-understanding: that feeling of comprehending nothing, of simply being pulled along by a genius, is delightful. I hope that everyone will get to feel it one day.

The Joy of Writing

Why does this written doe bound through these written woods?
For a drink of written water from a spring
whose surface will xerox her soft muzzle?
Why does she lift her head; does she hear something?
Perched on four slim legs borrowed from the truth,
she pricks up her ears beneath my fingertips.
Silence – this word also rustles across the page
and parts the boughs
that have sprouted from the word “woods.”

Lying in wait, set to pounce on the blank page,
are letters up to no good,
clutches of clauses so subordinate
they’ll never let her get away.

Each drop of ink contains a fair supply
of hunters, equipped with squinting eyes behind their sights,
prepared to swarm the sloping pen at any moment,
surround the doe, and slowly aim their guns.

They forget that what’s here isn’t life.
Other laws, black on white, obtain.
The twinkling of an eye will take as long as I say,
and will, if I wish, divide into tiny eternities,
full of bullets stopped in mid-flight.
Not a thing will ever happen unless I say so.
Without my blessing, not a leaf will fall,
not a blade of grass will bend beneath that little hoof’s full stop.

Is there then a world
where I rule absolutely on fate?
A time I bind with chains of signs?
An existence become endless at my bidding?

The joy of writing.
The power of preserving.
Revenge of a mortal hand.

By Wislawa Szymborska
From “No End of Fun”, 1967
Translated by S. Baranczak & C. Cavanagh

Dance Ink – Vol. 8 No. 2