Normally, a Linux system should connect to any Wi-Fi network without trouble, but Eduroam makes it a little more difficult. Because my project needs Wi-Fi to get a good response from the Google Assistant API, I tried to connect to the Royal College of Art's Eduroam network.

I first tried the Eduroam installer for Linux from https://cat.eduroam.org/, but it always told me that the process had failed. So I tried another method, which only requires adding a few lines to the wpa_supplicant configuration file. The file lives at /etc/wpa_supplicant/wpa_supplicant.conf

What you need to do is:

  1. Copy /etc/wpa_supplicant/wpa_supplicant.conf to your Desktop;
  2. Open the copy with a text editor;
  3. Add the following lines and save;
  4. Open a Terminal and run "sudo mv 'your new conf file' /etc/wpa_supplicant/wpa_supplicant.conf", replacing 'your new conf file' with the path to your edited copy;
  5. Reboot the system;
  6. Done.
network={
    ssid="eduroam"
    scan_ssid=1
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="youridentity@youridentitydomain"
    password="yourpassword"
    phase1="peaplabel=0"
    phase2="auth=MSCHAPV2"
}

The parallax effect is widely used in web design: it gives two-dimensional images a three-dimensional feel. I am going to use it in my RCA group project, a comic about the Precariat.

Here is an impressive example named Protanopia, which uses the parallax effect well:

The C# script that realizes it mainly uses transform.position together with Input.mousePosition for the interaction.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class parallaxEffect : MonoBehaviour {
    // Margin controls how far this layer shifts with the mouse; Layer sets its depth offset.
    public float Margin;
    public float Layer;
    float x;
    float y;
    float Easing = 0.2f;
    Vector3 pos;

    void Start () {
        // Remember the layer's original position.
        pos = transform.position;
    }

    void Update () {
        // Mouse offset from the centre of the screen, eased towards the target.
        float targetX = Input.mousePosition.x - Screen.width / 2f;
        float dx = targetX - x;
        x += dx * Easing;
        float targetY = Input.mousePosition.y - Screen.height / 2f;
        float dy = targetY - y;
        y += dy * Easing;
        Vector3 direction = new Vector3(x, y, 0f);
        Vector3 depth = new Vector3(0f, 0f, Layer);
        // Shift the layer against the mouse movement, scaled by Margin, then push it to its depth.
        this.transform.position = pos - direction / 500f * Margin + depth;
    }
}
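A usage note (my own reading of the script, not stated in the original post): the script is meant to be attached to each image layer, with each layer given its own Margin and Layer values; layers that should feel closer to the viewer get a larger Margin so they shift more with the mouse, and Layer pushes each one to its own depth, which is what creates the parallax illusion.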

 

There are two ways to make mouse interaction work in Unity's First Person Controller:

  1. Make cursor visible;
  2. Use scripting: Raycasters.

1.Make cursor visible

To achieve this, you only need to switch off "Lock Cursor" in the Inspector of the FPSController. Once the cursor is visible, you will notice a lag between the motion of the camera and the motion of the cursor. To reduce it, adjust "X Sensitivity" and "Y Sensitivity"; 8 and 8 normally works well.
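If you prefer to do this from a script instead of the Inspector, a minimal sketch of my own using Unity's standard Cursor API (not part of the FPSController package) looks like this:

using UnityEngine;

public class ShowCursor : MonoBehaviour {

    void Start () {
        // Unlock and show the hardware cursor; "Lock Cursor" on the FPSController drives the same settings.
        Cursor.lockState = CursorLockMode.None;
        Cursor.visible = true;
    }
}

Keep in mind that the FPSController's MouseLook can lock the cursor again while the game is running, so switching off "Lock Cursor" in the Inspector is still the simplest route.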


Then you can put whatever you want inside a "void OnMouseDown () {}" function.

As an example of changing scenes, the C# script looks like this:

using UnityEngine;
using UnityEngine.SceneManagement;
public class mouseTester : MonoBehaviour
{
    bool doChange;
    void Start()
    {
        doChange = false;
    }
    void Update()
    {
        sceneChange();
    }
    void OnMouseDown()
    {
        // OnMouseDown only fires if this GameObject has a Collider attached.
        doChange = !doChange;
    }
    void sceneChange()
    {
        if (doChange)
        {
            // "scene2" must be added in File > Build Settings for LoadScene to find it.
            SceneManager.LoadScene("scene2");
        }
    }
}

2.Raycasters

To achieve this, you first have to understand what a raycast is in Unity. Here is a great definition I found:

A ray is a mathematical device that starts at an origin point and continues on in a specific direction forever. With a raycast you're casting a ray, cast being used like the word throw. It's like if you threw a rock and it continued on in that direction forever; it wouldn't stop until it hit something. You're interested as to whether it hit an object and what that object was.

Blocking objects would be the objects that you specifically define your raycast to be able to hit.
A blocking mask allows you to, instead of passing tons of blocking objects, define the layers in Unity you want your raycast to be able to hit in the form of a bitmask. The bitmask is coded so each bit represents a layer. If your bitmask was 0000000000000101, represented as (1 | 1<<2) or 5, then your raycast will only be blocked by layers 1 and 3 and can therefore only hit objects in those layers.
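To make the idea concrete, here is a minimal sketch of my own (not from the quoted answer) showing how such a mask can be built and passed to Physics.Raycast so the ray only hits the chosen layers; the layer name "Interactive" is just an assumption for the example:

using UnityEngine;

public class MaskedRaycast : MonoBehaviour {

    public float rayLength = 10f;

    void Update () {
        // (1 << 0) | (1 << 2) would give the bitmask 5 mentioned in the quote above;
        // LayerMask.GetMask builds the same kind of bitmask from layer names.
        int mask = LayerMask.GetMask("Interactive");

        RaycastHit hit;
        // Only colliders on layers included in the mask can block this ray.
        if (Physics.Raycast(transform.position, transform.forward, out hit, rayLength, mask)) {
            Debug.Log(hit.collider.name);
        }
    }
}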

In this sample, you are able to cast a red ray, and when you look at an object (in other words, when the ray hits an object) and press the left mouse button at the same time, the scene changes to scene2.


  1. Drag the script onto "FirstPersonCharacter";
  2. Adjust "Ray Length" in the Inspector of "FirstPersonCharacter";
  3. Make sure the Tag and Name of the target object are the same as those in the script; in this example they are "Interactive" and "Cube";
  4. Create a new scene whose name matches the one in the script; in this example it is "scene2";
  5. In "File > Build Settings", use "Add Open Scenes" to include both scenes;
  6. Done.
using System.Collections;   
using UnityEngine;
using UnityEngine.SceneManagement;

public class RaySceneChange : MonoBehaviour {
    //RaycastHit stores information about whatever the ray hits
    private RaycastHit vision;
    public float rayLength;
    bool doChange;
    void Start()
    {
        doChange = false;
    }
    void Update()
    {
        //Draw a red debug ray, only to visualise how the raycast works; you can delete this line
        //The first argument is the ray's origin (the camera position) and the second its direction and length, so the ray points at the centre of the screen
        Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * rayLength, Color.red, 0.5f);
        sceneChange();
        if (Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out vision, rayLength))
        {
            //Only the object with the defined Tag can be interacted with; the Tag is set in the Inspector of the object
            if (vision.collider.tag == "Interactive")
            {
                //Only the object with the defined Name can be interacted with; the Name is the object's name in the Hierarchy
                if (vision.collider.name == "Cube")
                {
                    //GetMouseButtonDown: 0 = left click; 1 = right click; 2 = middle click
                    if (Input.GetMouseButtonDown(0))
                    {
                        doChange = !doChange;
                    }
                }
            }
        }
    }
    void sceneChange()
    {
        if (doChange)
        {
            SceneManager.LoadScene("scene2");
        }
    }
}

I also found a great sample here, which lets the audience hold an object by pressing E while looking at it.


Here is the C# script. You can just replace the last one with it. The process is the same.

using UnityEngine;
using System.Collections;
public class RayCastExample : MonoBehaviour
{
    private RaycastHit vision;
    public float rayLength;
    private bool isGrabbed;
    private Rigidbody grabbedObject;
    void Start()
    {
        isGrabbed = false;
    }
    void Update()
    {
        Debug.DrawRay(Camera.main.transform.position, Camera.main.transform.forward * rayLength, Color.red, 0.5f);
        if(Physics.Raycast(Camera.main.transform.position, Camera.main.transform.forward, out vision, rayLength))
        {
            if(vision.collider.tag == "Interactive")
            {
                Debug.Log(vision.collider.name);
                if(Input.GetKeyDown(KeyCode.E) && !isGrabbed)
                {
                    // vision.rigidbody is null unless the hit object has a Rigidbody component
                    grabbedObject = vision.rigidbody;
                    grabbedObject.isKinematic = true;
                    // Parent the grabbed object to the FirstPersonCharacter so it follows the player
                    grabbedObject.transform.SetParent(gameObject.transform);
                    isGrabbed = true;
                }
                else if(isGrabbed && Input.GetKeyDown(KeyCode.E))
                {
                    grabbedObject.transform.parent = null;
                    grabbedObject.isKinematic = false;
                    isGrabbed = false;
                }
            }
        }
    }
}

There is an ARKit demonstration named Portal created by the Japanese developer Kei Wakizuka. When you touch the screen, a portal appears in the real space on the screen, with correct perspective and a space-distortion effect. Through the portal there is a digital world, and it is an immersive one, because the audience can also step into that world, after which the real world sits behind the portal.

There is now a variety of similar projects, and most of them are named Dokodemo Door after the magical tool in the Japanese manga Doraemon. In the manga, the Dokodemo Door lets Doraemon and his friends go anywhere.

Most Dokodemo Door projects are based on SLAM technology such as ARKit. The reason is that with image-recognition-based AR it is not so easy to step into the digital world, because the audience has to keep the device aimed at the image, which makes for a terrible experience. But I will keep experimenting with both technologies; there must be more than one possible form for a Dokodemo Door.

My experiments are based on Vuforia and ARKit. Here are the two results, and you can see the difference between the two design logics:


Creating the negative space is the basic step of this kind of AR experience. So far I have only found one method, which uses a DepthMask shader. The idea is "a mask object using the Depth Mask shader. This object will be drawn just after regular opaque objects, and will prevent subsequent objects from being drawn behind it."

1.Based on Vuforia

1.Components

  1. A target image, which is only necessary for image-recognition-based AR; it should sit a little lower than the hole;
  2. A closed 3D object with a hole (I created it in C4D), used as the entrance for the audience; in my example it is a cylinder. DepthMask.shader should be applied to this closed shape so that it acts as an invisibility cloak hiding the other components inside;
  3. An inner box without a top side. This will be the most important part, and there may be better ways to construct it if you want to realise a whole virtual world, but that is not necessary in my example;
  4. A ball with ShowInside.shader.

2.Shader

1.DepthMask.shader

 Shader "Custom/DepthMask" {
  
     SubShader {
         // Render the mask after regular geometry, but before masked geometry and
         // transparent things.
  
         Tags {"Queue" = "Geometry-10" }
  
         // Don't draw in the RGBA channels; just the depth buffer
  
         ColorMask 0
         ZWrite On
  
         // Do nothing specific in the pass:
  
         Pass {}
     }
 }

2.ShowInside.shader. Here is the documentation from Unity about how to cull front or back faces (by default the back faces are culled, so you can't see the inside of a 3D object).

 Shader "Custom/ShowInside" {
     Properties{
         _Color("Main Color"Color) = (1,1,1,1)
         _MainTex("Base (RGB)"2D) = "white" {} 
     }
 
         SubShader{
         Tags"RenderType" = "Opaque" }  
         LOD 100
 
         Pass{
         Cull Front    
         Lighting Off
         SetTexture[_MainTex]{ combine texture }
         SetTexture[_MainTex]
             {
             ConstantColor[_Color]
             Combine Previous * Constant
             }
         }
     }
 }

2.Based on ARKit

Components

  1. Two closed objects with holes facing the same direction and of different sizes: the outer one should be a little larger than the inner one, since the outer one's job is to hide the inner one via DepthMask.shader, while the skybox material should be applied to the inner one to create a scene with a sense of depth;
  2. A doorframe for the hole, to stop the edge looking too sharp. It can be a realistic doorframe with a high-quality texture or a black hole with a dynamic shader;
  3. A scene with several objects to create a sense of spatial hierarchy.

There are great examples of mixing animation and graphics using AR technology, made by EYEJACK:

EYEJACK has also released an AR book that includes these examples:

I feel it is not yet perfect at this stage, because of the limits of current devices and algorithms, but it is a pleasant way to use AR that points to the future, especially with HMDs such as Google Glass.

So the problem now is how to achieve this effect with present technology, in a way easy enough that designers with a basic coding background can do it.

At first, I had two ideas:

  1. Using a shader to make the background invisible;
  2. Using a video with an alpha channel.

The first way works well, and I learnt a lot from this tutorial, but its drawback is that it is difficult to place the video exactly where I want it, which makes me think it might not be the best method. Here is the result:


1. Software

  1. macOS 10.13.2;
  2. Unity 2017.3;
  3. Vuforia SDK included in Unity’s installation package;
  4. After Effects CC 2017.

2. Video

  1. In AE, use Effect > Keying > Color Range to make the background invisible, and export two videos: one with the RGB channels and another with the alpha channel.
  2. Create a new composition and place the two videos side by side (the RGB video on the left and the alpha video on the right, to match the shader below), then render it as an mp4.

3. AR

  1. If you don't know how to set up Vuforia's ImageTarget environment, please follow the official Vuforia tutorial; it is quite basic and I don't want to copy it here.
  2. Create a Plane in the Hierarchy (it will be used as the surface the video plays on) and drag it under the ImageTarget.
  3. Create a new Material in the Project panel and drag it onto the Plane, then use Add Component > Video Player on the Plane, import your video into the Project and drag it into the Video Clip slot of the Video Player (a scripted alternative is sketched after this list).
  4. Create a new Shader in the Project panel and replace all the default code with the following, which I learnt from the tutorial mentioned above:
         Shader "Custom/Transparent" {
     Properties {
         _Color ("Color"Color) = (1,1,1,1)
         _MainTex ("Albedo (RGB)"2D) = "white" {}
         _Glossiness ("Smoothness"Range(0,1)) = 0.5
         _Metallic ("Metallic"Range(0,1)) = 0.0
         _Num("Num",float) = 0.5 

    }

    SubShader {
         Tags { "Queue"="Transparent"  "RenderType"="Transparent"}
         LOD 200

        CGPROGRAM
     #pragma surface surf NoLighting alpha:auto
 
     fixed4 LightingNoLighting(SurfaceOutput s, fixed3 lightDir, fixed atten)
         {
             fixed4 c;
             c.rgb = s.Albedo;
             c.a = s.Alpha;
             return c;
         }
     float _Num;

        sampler2D _MainTex;

        struct Input {
             float2 uv_MainTex;
         };

        half _Glossiness;
         half _Metallic;
         fixed4 _Color;
         UNITY_INSTANCING_BUFFER_START(Props)
         UNITY_INSTANCING_BUFFER_END(Props)
     void surf (Input IN, inout SurfaceOutput o) 
         {
             o.Emission = tex2D(_MainTex, IN.uv_MainTex).rgb;            
             

             if (IN.uv_MainTex.x >= 0.5)
             {
                 o.Alpha = 0;
             }
             else
             {
                 o.Alpha = tex2D(_MainTex, float2(IN.uv_MainTex.x + 0.5, IN.uv_MainTex.y)).rgb;                        
             }

        }
         ENDCG
     }
     FallBack "Diffuse"
     }

5. Select the Plane and switch its material's shader to the new Custom/Transparent shader.
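If you prefer to wire up the Video Player from code instead of the Inspector (steps 2 and 3 above), here is a minimal sketch of my own using Unity's VideoPlayer API; the public clip field is an assumption you would still assign in the Inspector:

using UnityEngine;
using UnityEngine.Video;

public class VideoPlane : MonoBehaviour {

    // Assign the imported video clip in the Inspector.
    public VideoClip clip;

    void Start () {
        // Add a Video Player to this plane and render the clip onto its material,
        // which should already use the Custom/Transparent shader from step 4.
        VideoPlayer player = gameObject.AddComponent<VideoPlayer>();
        player.clip = clip;
        player.isLooping = true;
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
        player.Play();
    }
}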

4.Test & Build

  1. If you have finished the official Vuforia tutorial, you should know how to preview the scene with your webcam;
  2. If the webcam preview works well, you can build to your mobile device with File > Build and Run;
  3. Unity will automatically launch Xcode; you can learn this process from the official Unity tutorial;
  4. Done.

The second way is much easier, but it has the limitation that it only works on Apple devices, because it uses Apple ProRes 4444, which is only supported on macOS; check Unity's official documentation on video transparency support for more information.

1.Video

  1. The workflow is the same as in the first method; in After Effects you end up with a video that has a transparent background.
  2. Then export the video in QuickTime format with RGB + Alpha channels.
  3. In the same window there is a button named Format Options; click it and, in the new window, choose Apple ProRes 4444 as the Video Codec.
  4. Render it and you will have a Unity-ready transparent video.

2. AR

  1. Import it into Unity, tick Keep Alpha in the Inspector and click Apply.
  2. Drag it into the Video Clip slot of the Plane's Video Player component (the same Plane as in the first method), and create a new Material for the Plane whose shader is set to Transparent/VertexLit with Z.
  3. Done.

I'm back to creative coding, trying to restart, and I may start to note my schedule here.

1# When you greet this little robot, it will be happy: using an Arduino with an 8×8 LED matrix and a motion sensor.

2# Add speech recognition: when you say "hey" it will smile, when you say "haha" it will be sad, and when you say "byebye" it will show a neutral face.