Generally, there is a lot of designer-friendly projection mapping software, such as HeavyM and MadMapper. In practice, though, I have found it is not always the most suitable. For instance, if a piece only needs one fixed angle to show a short film, but has to run for 4 days at 7 hours per day, I can't leave my laptop there for such a simple job for such a long time. MadMapper does offer a solution called MiniMad, a small media box that plays back MadMapper files, but for a case like this I don't think it is worth it.

Let's get to the point: After Effects can deliver a simple, high-quality result in cases like this.

In this tutorial, I used Adobe After Effects CC 2017, a mini HD media box, and a BenQ TH682ST projector.

  1. Check the output of the media box and the projector, and make sure the media box outputs the same resolution as the projector. In my case the projector is Full HD (1920×1080), so I tried setting the media box to 1080i 50Hz, 1080i 60Hz and 1080p 50Hz, and finally settled on 1080p 60Hz, simply because that is what worked on the projector.
  2. Since the projector can play a 1920×1080 video, create a 1920×1080 composition in After Effects and put everything you want to show in the projection mapping inside it.
  3. Most importantly, use Viewer > New Viewer in After Effects to open a second preview window and drag it onto the projector's screen.
  4. Use the shortcut Command+\ to make that viewer full screen.
  5. Now it is ready for projection mapping. In the viewer on the laptop's screen, I only use the "Distort > Corner Pin" effect to move every element into the right position.
  6. After animating, export the animation via Composition > Add to Render Queue. (A good format that plays back at good quality while staying small, and therefore fast to play, is QuickTime with the H.264 codec.)
  7. Finally, play the exported video full screen on the second screen (the projector) using a video player such as QuickTime Player or VLC.

 

Normally, watching as an activity can be divided into three categories:

  1. A single audience member watches a single screen showing one piece of content;
  2. Multiple audience members watch a single screen showing one piece of content;
  3. Multiple audience members watch multiple screens showing one piece of content.

But what about a single audience member, or multiple ones, watching multiple screens showing multiple pieces of content and experiencing one story? That is the dynamic watching discussed here.

For instance, an immersive theatre piece or an exhibition about a remarkable person can offer a situation in which the audience experiences multiple fragments, or plots, and finally assembles the panorama by integrating those plots. It is not one-dimensional storytelling based on time but a four-dimensional situation offered to the audience.

This is the idea I took from Dr Eleanor Dare's feedback on the proposal for my final project at the Royal College of Art.

Also, consider ways in which the imagery may be less predictable, can you show us a world no one has seen before? Can you create an aesthetic which does not replicate  visual cliches about the future we have all seen many times, I’d urge you to challenge yourself on that front, and get away from familiar modes of representation around the future, AI and the posthuman…

I also got suggestions from Ben Stopher. The Futures Cone really interests me. Futurists have often spoken and continue to speak of three main classes of futures: possible, probable, and preferable. These have at times lent themselves to define various forms of more specialised futures activity, with some futurists focusing on, as it were, exploring the possible; some on analysing the probable; and some on shaping the preferable, with many related variations on this nomenclature and phraseology (e.g., again, Amara 1991, and many others).  It is possible to expand upon this three-part taxonomy to include at least 7 (or even 8) major types of alternative futures. It is convenient to depict this expanded taxonomy of alternative futures as a ‘cone’ diagram. The ‘futures cone’ model was used to portray alternative futures by Hancock and Bezold (1994), and was itself based on a taxonomy of futures by Henchey (1978), wherein four main classes of future were discussed (possible, plausible, probable, preferable).

  • Potential – everything beyond the present moment is a potential future. This comes from the assumption that the future is undetermined and ‘open’ not inevitable or ‘fixed’, which is perhaps the foundational axiom of Futures Studies.
  • Preposterous – these are the futures we judge to be ‘ridiculous’, ‘impossible’, or that will ‘never’ happen. I introduced this category because the next category (which used to be the edge of the original form of the cone) did not seem big enough, or able to capture the sometimes-vehement refusal to even entertain them that some people would exhibit to some ideas about the future. This category arises from homage to James Dator and his Second Law of the Future—“any useful idea about the future should appear ridiculous” (Dator 2005)—as well as to Arthur C. Clarke and his Second Law—“the only way of finding the limits of the possible is by going beyond them into the impossible” (Clarke 2000, p. 2). Accordingly, the boundary between the Preposterous and the Possible could be reasonably called the ‘Clarke-Dator Boundary’ or perhaps the ‘Clarke-Dator Discontinuity’, since crossing it in the outward direction represents a very important but, for some people, very difficult, movement in prospection thinking. (This is what is represented by the red arrows in the diagram.)
  • Possible – these are those futures that we think ‘might’ happen, based on some future knowledge we do not yet possess, but which we might possess someday (e.g., warp drive).
  • Plausible – those we think ‘could’ happen based on our current understanding of how the world works (physical laws, social processes, etc).
  • Probable – those we think are ‘likely to’ happen, usually based on (in many cases, quantitative) current trends.
  • Preferable – those we think ‘should’ or ‘ought to’ happen: normative value judgements as opposed to the mostly cognitive, above. There is also of course the associated converse class—the un-preferred futures—a ‘shadow’ form of anti-normative futures that we think should not happen nor ever be allowed to happen (e.g., global climate change scenarios come to mind).
  • Projected – the (singular) default, business as usual, ‘baseline’, extrapolated ‘continuation of the past through the present’ future. This single future could also be considered as being ‘the most probable’ of the Probable futures. And,
  • (Predicted) – the future that someone claims ‘will’ happen. I briefly toyed with using this category for a few years quite some time ago now, but I ended up not using it anymore because it tends to cloud the openness to possibilities (or, more usefully, the ‘preposter-abilities’!) that using the full Futures Cone is intended to engender.

This taxonomy finds its greatest utility when undertaking the Prospection phase of the Generic Foresight Process (Voros 2003) especially when the taxonomy is presented in reverse order from Projected to Preposterous. Here, one frames the extent to which the thinking is ‘opened out’ (implied by a reverse-order presentation of the taxonomy) by choosing a question form that is appropriate to the degree of openness required for the futures exploration. Thus, “what preposterously ‘impossible’ things might happen?” sets a different tone for prospection than the somewhat tamer question “what is projected to occur in the next 12 months?”

Sci-fi film is getting boring in this period, when science and technology keep exceeding expectations and the distance between milestones gets smaller and smaller. Most sci-fi films are about artificial intelligence, extraterrestrial intelligence or the end of the world, which is by now familiar to everyone. We live in a minute-calculated world. This is why The Long Now Foundation was started, to provide a counterpoint to today's accelerating culture and help make long-term thinking more common. The foundation is running a significant project named The 10,000 Year Clock.

In addition, there is another interesting example, Onkalo, a gigantic bunker built in Finland, 500 metres below the surface, that has to last 100,000 years. It is supposedly impervious to any event on the surface and far away from any possible earthquake danger: its purpose is to house thousands of tonnes of radioactive nuclear waste.

What does time, especially on such an extremely long scale, mean to us, not only to a single human but to humanity as a whole?

Wild cards are, by definition, low-probability events (sometimes referred to as ‘mini-scenarios’) that would have very large impact if they occurred (Petersen 1997, 1999). Since they are considered ‘low probability’ (i.e., outside the Probable zone), any member of any class of future outside the range of probable futures could by definition be considered a wildcard (although this usage is not common, as the focus tends to be on ‘high impact’ events).

So, in my project, the ideas are the realisation of artificial intelligence, accidents caused by artificial intelligence, and the transformation from human to cyborg, from the organic to the inorganic, from the cell to the electronic; these are the predicted future or, at most, the preferable future. The main idea concerns the rights of the trans-human (which I define as the Chimera), and mainly the discrimination that is going to fall on the non-human (as commonly defined now). This might be the plausible future.

Under this framework, the project needs to go further and step into the Preposterous area.

So what is the ridiculous, impossible, never-going-to-happen future?



Normally, a Linux system should connect to any Wi-Fi network easily, but the Eduroam network makes it a little more difficult. Because my project needs Wi-Fi to get a good response from the Google Assistant API, I tried to connect to the Royal College of Art's Eduroam network.

I first tried the Eduroam installer for Linux from https://cat.eduroam.org/, but it always told me the process had failed. So I tried another method, which only requires adding a network block to the wpa_supplicant configuration file. The file should be at /etc/wpa_supplicant/wpa_supplicant.conf.

What you need to do is:

  1. Copy /etc/wpa_supplicant/wpa_supplicant.conf to your Desktop;
  2. Open the copy with a text editor;
  3. Add the following block and save;
  4. Open a terminal and run "sudo mv 'your new conf file' /etc/wpa_supplicant/wpa_supplicant.conf", replacing 'your new conf file' with the path to your edited copy;
  5. Reboot the system;
  6. Done.
network={
    ssid="eduroam"
    scan_ssid=1
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="youridentity@youridentitydomain"
    password="yourpassword"
    phase1="peaplabel=0"
    phase2="auth=MSCHAPV2"
}

The parallax effect is widely used in web design; it gives two-dimensional images a three-dimensional feel. I am going to use it in my RCA group project, a comic about the precariat.

Here is an impressive example named Protanopia, which uses the parallax effect well:

The C# script that realises it mainly combines transform.position with Input.mousePosition to drive the interaction.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class parallaxEffect : MonoBehaviour {
    public float Margin;   // how strongly this layer reacts to the mouse
    public float Layer;    // depth offset along z for this layer
    float x;
    float y;
    float Easing = 0.2f;   // smoothing factor for the mouse-follow motion
    Vector3 pos;

    void Start () {
        // Remember the layer's original position.
        pos = transform.position;
    }

    void Update () {
        // Mouse offset from the centre of the screen, smoothed with easing.
        float targetX = Input.mousePosition.x - Screen.width / 2f;
        float dx = targetX - x;
        x += dx * Easing;
        float targetY = Input.mousePosition.y - Screen.height / 2f;
        float dy = targetY - y;
        y += dy * Easing;
        // Shift the layer against the mouse, scaled by Margin, and push it to its depth.
        Vector3 direction = new Vector3(x, y, 0f);
        Vector3 depth = new Vector3(0f, 0f, Layer);
        this.transform.position = pos - direction / 500f * Margin + depth;
    }
}
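A note on usage, based on my reading of the script rather than the original tutorial: attach the component to each visual layer; a larger Margin makes that layer drift more with the mouse, which suits foreground elements, while a larger Layer value pushes the element further along the z-axis so it sits behind the others (assuming the camera looks down the +z axis, as in a default Unity scene).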

 

There are two ways to handle mouse interaction with Unity's First Person Controller:

  1. Make the cursor visible;
  2. Use scripting: Raycasters.

1. Make the cursor visible

To achieve this, you only need to switch off "Lock Cursor" in the Inspector of the FPSController. Once the cursor is visible, you will notice a lag between the motion of the camera and the motion of the cursor. To reduce it, adjust "X Sensitivity" and "Y Sensitivity"; 8 and 8 normally works well.
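If you prefer to do this from a script instead of the Inspector, Unity's Cursor API can unlock and show the cursor. Here is a minimal sketch (the component name ShowCursor is my own, and note that the FPSController's own "Lock Cursor" option may re-lock the cursor each frame, so the Inspector toggle remains the simplest route):

using UnityEngine;

// Minimal sketch: unlock and show the hardware cursor from code,
// as an alternative to switching off "Lock Cursor" in the Inspector.
public class ShowCursor : MonoBehaviour
{
    void Start()
    {
        Cursor.lockState = CursorLockMode.None; // stop the cursor being locked to the centre of the screen
        Cursor.visible = true;                  // draw the system cursor
    }
}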


Then you can set up any “void OnMouseDown () {}” script function.

As an example, for changing the scene the C# script would be:

using UnityEngine;
using UnityEngine.SceneManagement;

public class mouseTester : MonoBehaviour
{
    bool doChange;

    void Start()
    {
        doChange = false;
    }

    void Update()
    {
        sceneChange();
    }

    // Called by Unity when this object's collider is clicked.
    void OnMouseDown()
    {
        doChange = !doChange;
    }

    void sceneChange()
    {
        if (doChange)
        {
            // The scene name must be a string, and the scene must be added in Build Settings.
            SceneManager.LoadScene("scene2");
        }
    }
}

2. Raycasters

To achieve this, you first have to understand what a raycast is in Unity. Here is a great definition I found:

A ray is a mathematical device that starts at an origin point and continues on in a specific direction forever. With a raycast you're casting a ray, cast being used like the word throw. It's like if you threw a rock and it continued on in that direction forever, it wouldn't stop until it hit something. You're interested as to whether it hit an object and what that object was.

Blocking objects would be the objects that you specifically define your raycast to be able to hit.
A blocking mask allows you to, instead of passing tons of blocking objects, define the layers in Unity you want your raycast to be able to hit in the form of a bitmask. The bitmask is coded so each bit represents a layer. If your bitmask was 0000000000000101, represented as (1 | 1<<2) or 5, then your raycast will only be blocked by layers 1 and 3 and can therefore only hit objects in those layers.
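In practice you rarely write the bitmask by hand; Unity's LayerMask helpers can build it from layer names. Here is a minimal sketch of a raycast restricted by a layer mask (the layer name "Interactive" is only an assumption for illustration, and this snippet is separate from the scene-change script below):

using UnityEngine;

// Minimal sketch: a raycast that can only hit objects on certain layers.
public class MaskedRaycast : MonoBehaviour
{
    public float rayLength = 10f;

    void Update()
    {
        // Build the bitmask from a layer name instead of writing the bits out manually.
        int mask = LayerMask.GetMask("Interactive");

        RaycastHit hit;
        // Only colliders whose GameObjects are on layers included in the mask can block this ray.
        if (Physics.Raycast(transform.position, transform.forward, out hit, rayLength, mask))
        {
            Debug.Log("Hit " + hit.collider.name);
        }
    }
}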

In this sample, you cast a red ray, and when you look at an object (in other words, when the ray is hitting an object) and press left click at the same time, the scene changes to scene2.


  1. Drag the script onto "FirstPersonCharacter";
  2. Adjust "Ray Length" in the Inspector of "FirstPersonCharacter";
  3. Make sure the Tag and Name of the target object are the same as those in the script; in this example they are "Interactive" and "Cube";
  4. Create a new scene whose name matches the one in the script; in this example it is "scene2";
  5. In File > Build Settings, use "Add Open Scenes" to include both scenes;
  6. Done.
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

public class RaySceneChange : MonoBehaviour {
    // RaycastHit stores information about whatever the ray hits
    private RaycastHit vision;
    public float rayLength;
    bool doChange;

    void Start()
    {
        doChange = false;
    }

    void Update()
    {
        // Draw a red debug ray, only to visualise how the raycast works; you can delete this line.
        // The first argument is the ray's origin and the second its direction, here straight out of the camera (the middle of the screen).
        Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * rayLength, Color.red, 0.5f);
        sceneChange();
        if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out vision, rayLength))
        {
            // Only an object with the defined Tag (set in the object's Inspector) can be interacted with
            if (vision.collider.tag == "Interactive")
            {
                // Only the object with the defined Name can be interacted with
                if (vision.collider.name == "Cube")
                {
                    // GetMouseButtonDown: 0 = left click; 1 = right click; 2 = middle click
                    if (Input.GetMouseButtonDown(0))
                    {
                        doChange = !doChange;
                    }
                }
            }
        }
    }

    void sceneChange()
    {
        if (doChange)
        {
            SceneManager.LoadScene("scene2");
        }
    }
}

I also found a great sample here, which lets the audience pick up an object by pressing E while looking at it.


Here is the C# script. You can simply replace the previous one with it; the setup process is the same.

using UnityEngine;
using System.Collections;

public class RayCastExample : MonoBehaviour
{
    private RaycastHit vision;
    public float rayLength;
    private bool isGrabbed;
    private Rigidbody grabbedObject;

    void Start()
    {
        isGrabbed = false;
    }

    void Update()
    {
        // Visualise the ray for debugging.
        Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * rayLength, Color.red, 0.5f);
        if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out vision, rayLength))
        {
            if (vision.collider.tag == "Interactive")
            {
                Debug.Log(vision.collider.name);
                // Press E while looking at the object to grab it...
                if (Input.GetKeyDown(KeyCode.E) && !isGrabbed)
                {
                    grabbedObject = vision.rigidbody;
                    grabbedObject.isKinematic = true;                        // stop physics while held
                    grabbedObject.transform.SetParent(gameObject.transform); // attach it to the camera
                    isGrabbed = true;
                }
                // ...and press E again to release it.
                else if (isGrabbed && Input.GetKeyDown(KeyCode.E))
                {
                    grabbedObject.transform.parent = null;
                    grabbedObject.isKinematic = false;
                    isGrabbed = false;
                }
            }
        }
    }
}

In 2015, Google Photos labeled black people ‘gorillas’.

There is a short video featuring "Black Desi" and his colleague "White Wanda". When Wanda, a white woman, is in front of the screen, the camera zooms to her face and moves as she moves. But when Desi, a black man, does the same, the camera does not respond by tracking him. The clip is light-hearted in tone but is titled "HP computers are racist".

 

 

As the news story "Facial recognition software is biased towards white men, researcher finds" reported, gender was misidentified in less than one percent of lighter-skinned males, in up to seven percent of lighter-skinned females, in up to 12 percent of darker-skinned males, and in up to 35 percent of darker-skinned females. It is hardly the first time that facial recognition technology has been proven inaccurate, but more and more evidence points towards the need for diverse data sets, as well as diversity among the people who create and deploy these technologies, in order for the algorithms to accurately recognise individuals regardless of race or other identifiers.

There is an ARKit demonstration named Portal, created by the Japanese developer Kei Wakizuka. When you touch the screen, a portal appears in the real space on the screen, with correct perspective and a space-distortion effect. Through the portal there is a digital world, and it is an immersive one: the audience can walk into that world, with the real world left behind the portal.

There is now a variety of similar projects, and most of them are named after the Dokodemo Door, a magical tool in the Japanese manga Doraemon. In the manga, the Dokodemo Door lets Doraemon and his friends go anywhere.

Most Dokodemo Door projects are based on SLAM technology such as ARKit. The reason is that with image-recognition-based AR it is not so easy to step into the digital world, because the audience has to keep the device aimed at the target image, which makes for a terrible experience. But I will keep experimenting with both technologies; there must be more than one way to make a Dokodemo Door.

My experiment is based on Vuforia and ARKit. Here are two results, and you can see the difference between the two design logics:


Creating the negative space is the basic step of this kind of AR experience. So far I have only found one method, which uses a DepthMask shader. The idea is to have "a mask object using the Depth Mask shader. This object will be drawn just after regular opaque objects, and will prevent subsequent objects from being drawn behind it."

1. Based on Vuforia

1. Components

  1. A target image, which is only necessary for image-recognition-based AR; it should be placed a little lower than the hole;
  2. A closed 3D object with a hole (modelled in C4D), used as the entrance for the audience; in my example it is a cylinder. DepthMask.shader should be applied to this closed shape so that it acts as an invisibility cloak hiding the other components inside;
  3. An inner box without a top side. This is the most important part, and there may be better ways to construct it if you want to build a full virtual world, but that is not necessary in my example;
  4. A ball with ShowInside.shader.

2. Shader

1. DepthMask.shader

 Shader "Custom/DepthMask" {
  
     SubShader {
         // Render the mask after regular geometry, but before masked geometry and
         // transparent things.
  
         Tags {"Queue" = "Geometry-10" }
  
         // Don't draw in the RGBA channels; just the depth buffer
  
         ColorMask 0
         ZWrite On
  
         // Do nothing specific in the pass:
  
         Pass {}
     }
 }

2. ShowInside.shader. Here is the Unity documentation on how to cull front or back faces (by default the back faces are culled, so you cannot see anything inside a 3D object).

 Shader "Custom/ShowInside" {
     Properties{
         _Color("Main Color"Color) = (1,1,1,1)
         _MainTex("Base (RGB)"2D) = "white" {} 
     }
 
         SubShader{
         Tags"RenderType" = "Opaque" }  
         LOD 100
 
         Pass{
         Cull Front    
         Lighting Off
         SetTexture[_MainTex]{ combine texture }
         SetTexture[_MainTex]
             {
             ConstantColor[_Color]
             Combine Previous * Constant
             }
         }
     }
 }

2. Based on ARKit

Components

  1. Two closed objects with holes facing the same direction, at different sizes: the outside one should be a little larger than the inside one, since the outside one's job is to hide the inside one via DepthMask.shader, while the skybox material should be applied to the inside one to create a scene with a sense of depth;
  2. A doorframe for the hole, to stop the edge looking too sharp. It can be a real doorframe with a high-quality texture or a black hole with a dynamic shader;
  3. A scene with several objects to create a sense of spatial hierarchy.

At the end of Automated Graphic Design, it is said:

Automation was looming in the early 2010s. But designers were too busy funding nostalgia on Kickstarter via good old Modernism. Trolling OS icons on Dribbble was more entertaining than debating and dealing with a political issue that would shape the way we now work, think and live. For most designers, it is all far too late.

Nearly three-quarters (73 percent) of US adults believe artificial intelligence will “eliminate more jobs than it creates,” according to a Gallup survey. But, the same survey found that less than a quarter (23 percent) of people were “worried” or “very worried” automation would affect them personally. Notably, these figures vary depending on education. For respondents with only a four-year college degree or less, 28 percent were worried about AI taking their job; for people with at least a bachelor degree, that figure was 15 percent.

One survey conducted by Quartz last year found that 90 percent of respondents thought that up to half of all jobs would be lost to automation in five years, but 91 percent said there was “no risk to my job.” Another study from the Pew Research Center in 2016 found the same: 65 percent of respondents said that 50 years from now automation would take over “much” of the work currently being done by humans, but 80 percent thought their own job would still exist in that time frame.

On the surface, these answers suggest complacency, ignorance, or short-sightedness, but they also reflect a deep divide among experts on what exactly the effects of new technology will have on the workplace.

Historically, though, it’s the cheerier scenario that’s been true: technology usually leads to a net gain in jobs, destroying some professions but creating new ones in the process. What’s different this time around, argue some economists and AI experts, is that machines are qualitatively smarter than they were in the past, and historical examples don’t offer a useful comparison. This stance is sometimes presented as a doomsday scenario in which AI and automation lead to mass unemployment.

These are extracts from the article 'Most Americans think artificial intelligence will destroy other people's jobs, not theirs'.

 

What is the precariat?

In sociology and economics, the precariat is a social class formed by people suffering from precarity, which is a condition of existence without predictability or security, affecting material or psychological welfare. The term is a portmanteau obtained by merging precarious with proletariat. Unlike the proletariat class of industrial workers in the 20th century who lacked their own means of production and hence sold their labour to live, members of the precariat are only partially involved in labour and must undertake extensive “unremunerated activities that are essential if they are to retain access to jobs and to decent earnings”. Specifically, it is the condition of lack of job security, including intermittent employment or underemployment and the resultant precarious existence. The emergence of this class has been ascribed to the entrenchment of neoliberal capitalism.

The analysis of the results of the Great British Class Survey of 2013, a collaboration between the BBC and researchers from several UK universities, contended there is a new model of class structure consisting of seven classes: a wealthy “elite”; a prosperous salaried “middle class” consisting of professionals and managers; a class of technical experts; a class of ‘new affluent’ workers, and at the lower levels of the class structure, in addition to an ageing traditional working class, a ‘precariat’ characterised by very low levels of capital and lasting precarious economic security, and a group of emergent service workers.

 

The following is from the first of a three-part series exploring the effects of global capitalism on modern workers by Guy Standing, author of The Precariat:

1. The first faction consists of those who have fallen from old working-class communities or families. They feel they do not have what their parents or peers had. They may be called atavists, since they look backwards, feeling deprived of a real or imagined past. Not having much education, they listen to populist sirens who play on their fears and blame “the other” – migrants, refugees, foreigners, or some other group easily demonized. The atavists supported Brexit and have flocked to the far right everywhere. They will continue to go that way until a new progressive politics reaches out to them.

2. The second group are nostalgics. These consist of migrants and beleaguered minorities, who feel deprived of a present time, a home, a belonging. Recognizing their supplicant status, mostly they keep their heads down politically. But occasionally the pressures become too great and they explode in days of rage. It would be churlish to blame them.

3. The third faction is what I call progressives, since they feel deprived of a lost future. It consists of people who go to college, promised by their parents, teachers and politicians that this will grant them a career. They soon realize they were sold a lottery ticket and come out without a future and with plenty of debt. This faction is dangerous in a more positive way. They are unlikely to support populists. But they also reject old conservative or social democratic political parties. Intuitively, they are looking for a new politics of paradise, which they do not see in the old political spectrum or in such bodies as trade unions.

It is an undeniable fact that university students face fierce competition in a harsher social environment, even though they have been told a promising future is waiting for them and have seen how their parents succeeded by following the same path. Has the promising future become a commercial product sold to students in a variety of forms? Have parents badly misread the situation?

Have universities become a precariat production line, just as education has been compared to a factory producing identical human beings for society?

Nowadays, university students constantly complain about employment pressure and the pressures of life, such as housing, and about the gap between reality and the ideal that was promised by their parents, teachers and politicians, but they rarely ask whether this should be such a common reality.

It is also a fact that the average salary keeps increasing, so everything seems to be going well.

But it must be questioned: is this situation really normal?

The Elephant Development that is happening now shows several facts. Of a planned 979 private homes, only 33 will be social rent affordable to the majority of people who live in the neighbourhood: a staggering 3.3% of the total homes Delancey wants to build. On the treatment of the numerous local traders at the Shopping Centre, there are still only poor intentions about making sure there are robust and genuine offers of relocation in the area; Delancey seeks to throw money at this problem by offering a pissy £250,000 'towards a relocation fund', but it is not clear how many of the 70 or so businesses there will get this help. So-called 'regeneration' based on property development might bring a bit more council tax into the Council's coffers, but socially it actually increases poverty, isolation, ill health, anxiety and so on.

I am starting to think this may not only be a negative result of the education economy itself, but also of the social environment produced by government, even if that is a commonplace observation I used to avoid making. Back in the 1970s, the Nixon administration got a bill pushing for UBI through Congress twice before it was blocked by the Senate. It certainly seems so when looking at data from societies which have adopted UBI in the past. In 1974, the Canadian town of Dauphin gave everyone a guaranteed basic income for four years, so that nobody fell below the poverty line. The data wasn't analysed fully until 2009, but the findings showed that children's school performance improved, hospitalisation went down and domestic violence was much reduced. It has also been found that countries with the shortest working weeks have the highest social capital: people not only volunteer more, they take more time for things like going to the theatre.

In an article written by Gemma Milne and posted on ogilvy.com, it is said that "The definition of work is something we haven't quite formalised as a society – if it's about doing something useful, then surely volunteering or caring for children and the elderly should count. In the context of mass automation, if robots are to take away our employment then are we to move towards a society where the focus is more on 'valuable' work, leaving us to lead better lives?"

So it seems the solution might be UBI (Unconditional Basic Income), so that the basic living conditions of graduates are protected. But, in my mind, would it reduce the value of being educated?

There is a description of this in the article Precarity Pilot: Making Space for Socially- and Politically-engaged Design Practice, which also offers a response through a series of Precarity Pilot practices:

There seems to be an open assumption within design education that designers should engage with pressing social and environmental issues. What became clear was that although designers and design education do not openly speak about it, within the creative industries most people are exposed to exhausting precarious working and living conditions, such as bulimic work patterns, long hours, poor pay, anxiety, psychological and physical stress, and lack of social protection (c.f. Elzenbaumer & Giuliani, 2014; Lorey, 2006;)