
Documentation


Week 1

Project setup

We started the process for our bachelor thesis by setting up the spaces and software we would use for collaboration throughout the whole project. We divided our roles and duties and made a communication agreement to avoid possible conflicts. After setting up our SCRUM board, we began tackling the tasks on it one by one.

Unpacking the briefing

First of all, we unpacked the briefing: we defined the target audience and an HMW question as well as a few design principles, before doing some desk research. We looked into the recently launched festival website, and searched for innovative navigation systems used by or developed for events or places like malls. Because the festival partly focuses on Game Tech, we thought this could be an interesting field to take inspiration from, and briefly looked into how players are guided around in games.

HMW question
Screenshot 2023-11-13 at 14 52 25

Target group definition
Screenshot 2023-11-13 at 11 10 46

Design principles
Screenshot 2023-11-13 at 11 11 20

Ideation

Next we individually brainstormed about possible ideas we could incorporate into the solution. Both of us know that we are most efficient when we are not limited by brainstorming techniques, so we mostly just dumped whatever came to mind on our digital whiteboard, though we still tried to use some techniques where we could. After this initial brainstorm, we converged the ideas by first voting on the other person's ideas we liked, and then using a COCD-box to evaluate the feasibility and originality of these ideas.

Client feedback

We took this COCD-box to our client during our first meeting with them and they were quite happy with where we stood with our initial ideas. They also mentioned they would like some social media aspect to the solution. To continue our ideation, we took the ideas our client liked most, and tried to see if we could make some connections between them, while making sure to think of how social media could be implemented in these ideas.

Concept

Our final concept is centered around a character. At each location there is an information point at the entrance about how this character can show the visitor around. Each location has a different version of the character, depending on what you can learn at that location. Every time you see the character, you can bring it to life on your smartphone. The character then tells you about the location and what you can find there, a fun fact related to the information at the location, which events are currently going on or are about to start...

At each location you can also collect a badge via the character. This encourages people to collect all badges (similar to completing achievements in games), and therefore visit all locations. Once collected, the badges can also be used to share on social media.

In addition to the specific versions of the character per location, there is an additional version that can be found throughout the streets of Buda Island. This can be used as a guide to find a new location. When you bring this version to life on your smartphone, it asks where you want to go, and based on that, it shows the correct directions.

The character has multiple goals: informing at locations, navigating between locations, encouraging discovery of other locations, and encouraging sharing on social media.

Tech analysis

There are a few technical decisions we need to make, including how we are going to bring the character to life and what we want visitors to share on social media.
Screenshot 2023-11-13 at 14 54 38

Design analysis

We also defined what we want the design to look like. This will serve as a starting point and rough guidelines to make a styleboard later on.
Screenshot 2023-11-13 at 14 58 01

Week 2

Fine-tuning concept

After receiving peer feedback, we defined points in our concept that weren't really worked out. On those points, we brainstormed a little again to get a more solid grasp on our concept. We also created a user journey map to see if there was anything we missed. This is where we found out our navigation system wasn't the best yet, and we fine-tuned it. After this we wrote a content list to get a clear view on what we need to work on. In addition, we also started lo-fi wireframing.

These are some things that we refined or added to our concept:

  • Instructions for visitors: what will this look like?
    We first had some sort of booth in mind, but this seemed a bit redundant, as we might as well just display the info that the character provides directly on the booth. Besides, the concept isn't so hard to grasp that it needs a full booth. We decided on having flyers at every location, as well as posters. Before the festival starts, we want to let potential visitors know about the guide through social media posts.
    In case we want to use image recognition to make the connection between the physical character and the website, we can also include a QR code as entry point to the website.
    We will have to make sure that even less tech-savvy visitors will grasp how the connection between the character and the website works, so we'll probably include illustrated instructions for this.
  • Navigation system: where will visitors find directions right after they exit a location?
    Our initial idea of just having navigation characters scattered across the island doesn't really hold up in this scenario. If people are already lost or looking for something, they're not going to be helped by having to look for something else.
    We decided that the navigation character conversation will be a page on the website you can access without having to find a physical character. We will still keep the physical navigation characters, but we will place them mostly at the entry points of the island. This way they can act as a welcoming guide for visitors just arriving, or they can spark interest for people just passing by that don't know about the festival.
  • Extra interactions: will visitors that don't own a smartphone be able to interact with the character in another way?
    We came up with a simple additional interaction. There would be a button next to or on the character. When pressed, an audio clip would play and tell the visitor a shortened version of the info provided by the digital version of the character. The clip could also suggest that visitors, if they are able, visit the website for more information.
    Because this is a feature that is only an extra addition, we are putting it in the backlog for now.
  • Extra features
    Throughout our refining process, we thought of some small extra things we could incorporate to make everything a bit more fun.
    We could include extra badges, inspired by hidden achievements. For example, when you've collected every badge, you receive a superbadge, or you also receive a badge when talking to your first navigation character. We are however not sure about the feasibility of this, as the eventual filter might get too crowded if there are too many badges on it.
    For each badge collected, there could be a detail page, containing info of when and where you collected it, the fun fact that you learned about and the version of the character that gave it to you.
    When talking to the same character for a second time, the conversation could start with a fun little extra thing the character says.

User journey map Screenshot 2023-11-16 at 16 25 54

Lo-fi wireframes Screenshot 2023-11-16 at 16 28 33 Screenshot 2023-11-16 at 16 28 52 Screenshot 2023-11-16 at 16 29 08

Tech stack choice

It was time to finalise the choices for the tech stack. With some advice from our development teacher and in line with the technologies the client preferred, we ended up with the stack below. The 8th Wall tool is still up for debate, as it is a paid tool and we don't know yet what the budget will be. Screenshot 2023-11-22 at 11 24 24 Screenshot 2023-11-22 at 11 24 52

Design style and experiments

To define the style a bit further, we made a styleboard. This will likely need another iteration as we are still waiting for the official branding guide of our client. To test the style we came up with so far, we also conducted a few style tests by applying the style to the lo-fi wireframes and started experimenting with designs for our character. Styleboard

Before starting to design all the assets for the characters, we did some initial design experiments around character design, making a collection of sketches to find a fitting style for the character's look and to explore how that style could be implemented. Character sketches

We also did some style tests on a mid-fi wireframe to test out colour hierarchy, fonts, whether the style fit FTI, etc. Character sketches

Week 3

Start of development

Next.js
Starting the development of this project began with reading up on Next.js. We were not yet familiar with this framework, so we started out by reading a lot of documentation. Afterwards, we also followed their tutorial on building your first Next.js app. Something that we struggled with was the relatively recently introduced App Router. This is a new feature that leverages React Server Components, which improves performance. Because it is so new, the documentation wasn't that thorough about it, and it was also not yet used in the tutorial we followed. However, the docs stated that the App Router is recommended over the previous Pages Router, so by watching a few videos and reading a couple of articles, we made sure to mostly understand the concept.
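
To keep the distinction clear for ourselves, here is a rough sketch of how the App Router splits things up; the file and component names are illustrative, not our final structure.

    // app/locations/[slug]/page.js – a Server Component by default;
    // the [slug] folder gives us a dynamic route per location.
    import LocationChat from '../../../components/LocationChat';

    export default function LocationPage({ params }) {
      return <LocationChat slug={params.slug} />;
    }

    // components/LocationChat.js – interactive, so it opts into the client side.
    'use client';
    import { useState } from 'react';

    export default function LocationChat({ slug }) {
      const [started, setStarted] = useState(false);
      return (
        <button onClick={() => setStarted(true)}>
          {started ? `Chatting about ${slug}` : 'Start the conversation'}
        </button>
      );
    }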

Conversational UI
To get to know the libraries we were planning on using, we made a demo for each of them. Of course, here too there were a couple of challenges to tackle. For the conversational UI, we found that react-simple-chatbot fit our needs perfectly: it's made for React, the conversation can be written out as a clear list of step objects, and it's quite customisable. Implementing this demo seemed to be going well, until components started rendering twice. There were errors in the console, so we tried looking into solving those, but they came from the library itself, so no luck there. Thinking there was no way to fix it, we looked into other chatbots, but all of these either weren't made for React, which made implementing them complicated, or were too complex for our needs, which made implementing them quite difficult as well. In the end we found the solution to the problem with the first library, hidden in YouTube comments. We were very happy we got over this obstacle, and fleshed out the demo a bit to have a quick reference for the future.
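
To give an idea of what the library expects, here is a minimal sketch of a conversation written as a steps array (the ids and messages are made up for illustration):

    import ChatBot from 'react-simple-chatbot';

    // Each step has an id; 'trigger' points to the id of the next step.
    const steps = [
      { id: 'intro', message: 'Hi! I can tell you all about this location.', trigger: 'choice' },
      {
        id: 'choice',
        options: [
          { value: 'info', label: 'Tell me more', trigger: 'info' },
          { value: 'bye', label: 'No thanks', trigger: 'bye' },
        ],
      },
      { id: 'info', message: 'This hall hosts the Game Tech talks.', trigger: 'bye' },
      { id: 'bye', message: 'Enjoy the festival!', end: true },
    ];

    const LocationChatDemo = () => <ChatBot steps={steps} />;

    export default LocationChatDemo;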

AR face filter
While talking about the wireframes of the badge page, we realised that using 8th Wall for the face filter might not work out the way we thought it would. With the subscription plan we proposed to our client, we would be able to embed a site containing the AR, hosted through 8th Wall itself, on our website in an iframe. An iframe cannot communicate with our own site in a straightforward manner, though, and this is crucial: we want users to only see the badges they collected on the filter, and this info will be stored in the localStorage of our site.

Before we found out about 8th Wall, we had already started experimenting with a free library. Through doing that, we found out that with React Three Fiber, we would be able to conditionally render parts of the 3D model used in the filter. This approach is much more flexible: unlike the limitations posed by 8th Wall, React Three Fiber gives us the ability to establish seamless communication between the AR filter and our website. This first library however also presented some challenges: it was an older library and gave us a lot of dependency issues. We managed to find another, more recent one, which had even been adapted into a React version (mind-ar-js-react). After finding the right way to export Blender models so they load correctly, we managed to create a first working demo using React components for all the parts of the model.
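
To illustrate that flexibility, this is roughly what conditionally rendering parts of a model looks like in React Three Fiber; the model path and node names are placeholders, not our actual assets.

    import { useGLTF } from '@react-three/drei';

    // Renders only the badges the visitor has collected.
    // The model path and node names are placeholders.
    const BadgeModel = ({ collectedBadges }) => {
      const { nodes } = useGLTF('/models/badges.glb');

      return (
        <group>
          {collectedBadges.includes('gametech') && (
            <mesh geometry={nodes.GameTechBadge.geometry} material={nodes.GameTechBadge.material} />
          )}
          {collectedBadges.includes('music') && (
            <mesh geometry={nodes.MusicBadge.geometry} material={nodes.MusicBadge.material} />
          )}
        </group>
      );
    };

    export default BadgeModel;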

Something that was included in the original mind-ar-js library but not in react-three-mind was an occluder for the head. This basically 'cuts' the shape of a head out of the 3D model to simulate depth. In the original library, this was as simple as adding an entity with the right property. In the library we are using, we had to build it from scratch. We found a simple free head model online that was available for commercial use, and some Three.js code to turn an object into an occluder. After figuring out how to translate this Three.js code into R3F code, we were able to use the head model as an occluder.
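
The occluder trick itself boils down to a material that writes depth but no colour; a minimal R3F sketch (with a placeholder model path and node name) looks like this:

    import { useGLTF } from '@react-three/drei';

    // The head model writes to the depth buffer but not to the screen,
    // so parts of the filter that end up behind the head are hidden.
    const HeadOccluder = () => {
      const { nodes } = useGLTF('/models/head.glb'); // placeholder path and node name
      return (
        <mesh geometry={nodes.Head.geometry} renderOrder={-1}>
          <meshBasicMaterial colorWrite={false} />
        </mesh>
      );
    };

    export default HeadOccluder;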

An essential part of the filter is a feature that allows users to take pictures with it. This proved to be more difficult than expected. html2canvas seemed like a good option for this, but its output didn't include the most important part of the filter: the Three.js canvas. We figured out this was because we had to turn on preserveDrawingBuffer. A more complex problem is that when the 'screenshot' gets taken, the filter doesn't line up with the image from the camera. This is something we haven't been able to fix this week. In addition, resizing the window also resizes the model of the filter. We'll have to look into this as well next week.
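
For reference, on a plain React Three Fiber canvas the preserveDrawingBuffer flag is passed through the gl prop; we assume our AR wrapper forwards renderer options in a similar way.

    import { Canvas } from '@react-three/fiber';

    // Without preserveDrawingBuffer, the WebGL canvas reads back as an empty image
    // when a library like html2canvas tries to copy its contents.
    const FilterCanvas = ({ children }) => (
      <Canvas gl={{ preserveDrawingBuffer: true }}>{children}</Canvas>
    );

    export default FilterCanvas;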

Design: Setting up wireframes

Conversation flow
We made a conversation flowchart to visualise and structure how the chatbot conversation would flow; this made creating the wireframes go much smoother. Screenshot 2023-11-27 at 20 00 53

Mid-fi wireframes
We updated the lo-fi wireframes we previously made to mid-fi, to get a clearer view of the website and to see what things could be improved or added. Screenshot 2023-11-27 at 20 04 05

High-fi Wireframes
Then from those mid-fi wireframes we made the high-fi wireframes, with more detailed text content. Once the mobile version was mostly in place, we put comments on parts that could be improved or changed, so we could later make a second version with those improvements implemented, along with the tablet version. Screenshot 2023-11-27 at 20 06 33

Design: Character details

Naming and description brainstorming
We made a schema to visualise the different characters in our concept and what they stand for. We also did a small naming brainstorm to come up with a fitting base name, plus variants of the name to go with the different versions. We have not been able to fully fill in all the info yet, since we still have limited info about certain events or locations. Screenshot 2023-11-27 at 20 21 19

Character variants
We are also still experimenting a bit with the character, currently focusing on the one design that seemed most fun and fitting to the concept and making variants of it. We are waiting for the client to give us a green light on the character design we want to go for. Screenshot 2023-11-27 at 20 28 20 Screenshot 2023-11-27 at 20 31 57

Week 4

Development: setting up the data system

To store all the information used in the conversations, we are using simple JSON files. For each location there is a separate file for all the info. This way we can create slugs from the titles of the files and make use of dynamic routing, which gets rid of a lot of code repetition. In addition, we want to provide info about current or future events at each location. Because it would be pointless to tell the user about events that have passed or events too far into the future, we want to dynamically insert info into the conversation based on the current date and time. For this we wrote some logic to compare the current date and time to the date and time of the event, and write custom messages based on that.
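
The comparison logic boils down to something like the sketch below; the function and field names are illustrative rather than our exact implementation.

    // Decide which message to show for an event, based on the current time.
    // `event.start` and `event.end` are ISO date strings from the location's JSON file.
    const getEventMessage = (event, now = new Date()) => {
      const start = new Date(event.start);
      const end = new Date(event.end);
      const msUntilStart = start - now;

      if (now >= start && now <= end) {
        return `${event.name} is going on right now!`;
      }
      if (msUntilStart > 0 && msUntilStart <= 2 * 60 * 60 * 1000) {
        const minutes = Math.round(msUntilStart / 60000);
        return `${event.name} starts in about ${minutes} minutes.`;
      }
      return null; // already over, or too far in the future to mention
    };

    export default getEventMessage;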

Development: advancement of the demos

Both the chatbot demo and the AR filter demo still had some issues that needed fixing or improvement.

AR filter
For the filter, like mentioned before, we were still struggling with the picture-taking part. Our first approach to fix the alignment issue, was changing the CSS right before 'taking the picture', and then changing it back to the original.

    const handleTakeImage = async () => {
        // Temporarily restyle the camera feed so it lines up with the filter
        const videoStyle = document.querySelector('video').style;
        videoStyle.marginLeft = '0';
        videoStyle.objectFit = 'cover';
        videoStyle.width = '100%';

        // Capture the AR view and download it as an image
        const image = document.getElementById('image');
        const canvas = await html2canvas(image);
        const data = canvas.toDataURL('image/jpeg');
        const link = document.createElement('a');
        link.href = data;
        link.download = 'image.jpg';
        document.body.appendChild(link);
        link.click();
        document.body.removeChild(link);

        // Put the original styling back
        videoStyle.marginLeft = '';
        videoStyle.objectFit = '';
        videoStyle.width = 'auto';
    }

This worked, but only partly. html2canvas doesn't support certain CSS properties, 'object-fit' being one of them, so the property effectively falls back to 'fill', which of course squishes or stretches the image. This meant we had to take a different approach. Our next idea worked: because the alignment was the problem, we figured we could just take a picture of the camera input and one of the 3D model, and then align them ourselves. Capturing the camera input worked right away using html2canvas, but with the model there was a problem: the result of capturing the 3D model had a white background. At first we just overlaid the images with 'mix-blend-mode: multiply' on the 3D model image, but this of course didn't look great, and would mean we definitely wouldn't be able to use white in our model. However, it did show us that our assumption was right and that we could correctly align the images ourselves afterwards.

When looking for a library to 'take a picture', we had also found another one called dom-to-image. We quickly stepped away from it at the time, because it couldn't capture the video element used for the camera input. But when stumbling upon the background problem, we remembered that the 3D layer captured by this library did seem to have a transparent background. This proved to be true! So for our final picture-taking function, we ended up combining the two libraries to capture each layer, and then aligning the layers ourselves in a new container.

    const handleTakeImage = async () => {
        const ARView = document.getElementById('ARView');

        // Get the video and filter elements
        const video = ARView.querySelector('video');
        const filter = ARView.querySelector('canvas');

        // html2canvas to get the video as an image (dom-to-image doesn't support the video element)
        const canvasV = await html2canvas(video);
        const dataV = canvasV.toDataURL('image/jpeg');

        // dom-to-image to get the filter as an image (html2canvas doesn't support transparency in the canvas element)
        const dataF = await domtoimage.toPng(filter);

        // Create a container div to hold the images
        const container = document.createElement('div');
        // Add styling to position the images on top of each other
        container.style.display = 'grid';
        container.style.overflow = 'hidden';
        container.style.gridTemplateColumns = '1fr';
        container.style.gridTemplateRows = '1fr';
        container.style.justifyContent = 'center';
        container.style.alignItems = 'center';
        container.style.justifyItems = 'center';
        container.style.transform = 'scaleX(-1)';
        container.classList.add('image');

        // Create img elements for the images
        const imgV = document.createElement('img');
        imgV.src = dataV;
        imgV.style.gridColumn = '1';
        imgV.style.gridRow = '1';
        const imgF = document.createElement('img');
        imgF.src = dataF;
        imgF.style.gridColumn = '1';
        imgF.style.gridRow = '1';

        // Append the images to the container
        container.appendChild(imgV);
        container.appendChild(imgF);

        // Append the container to the body
        document.body.appendChild(container);
    };

Chatbot
In the chatbot demo there were no real issues, but there was something we wanted to improve. For UX purposes, we wanted to differentiate between primary and secondary options through styling. Sadly there was nothing in the API that allowed us to style individual options. We tried to pass HTML as the label for the option, and this actually seemed to work, until the option was clicked. The other option was to write a custom component that behaved like the regular options step and add a prop to determine the styling. After some trial and error, we got this working. However, by fixing this UX issue, we created another one. Usually when clicking an option, a step on the user side is triggered to reflect what was clicked, further enhancing the conversational feel. With our custom component this wasn't the case. We read through a bunch of GitHub issues on the library repo and found that this feature was requested several times, but sadly it was never implemented, and no other solution was proposed.
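
For reference, our custom options step looks roughly like the sketch below (names and class names are illustrative); as far as we understand the library, it passes triggerNextStep to components rendered through the component property, and waitAction keeps the step waiting for that call.

    // A custom step that renders primary and secondary options with different styling.
    const StyledOptions = ({ options, triggerNextStep }) => (
      <div className="options">
        {options.map((option) => (
          <button
            key={option.value}
            className={option.primary ? 'option option--primary' : 'option option--secondary'}
            onClick={() => triggerNextStep({ value: option.value, trigger: option.trigger })}
          >
            {option.label}
          </button>
        ))}
      </div>
    );

    // Used in a step roughly like this:
    // { id: 'choice', component: <StyledOptions options={[...]} />, waitAction: true }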

Design: prototyping

Prototyping high-fi wireframes
We finished up the second version of the high-fi wireframes and prototyped them to test out the interactivity, and so that Maite could easily see how she needed to link certain things in her code. Screenshot 2023-12-20 at 02 19 17

Mobile & tablet design
After finishing the wireframes we started designing the mobile website. This started off with a bunch of colour hierarchy tests again, to decide which colours we wanted to be most prominent. Screenshot 2023-12-20 at 02 24 54 Screenshot 2023-12-20 at 02 25 05 After these colour tests we started doing the other pages in the chosen style, both for mobile and tablet, making iterations on those and prototyping them to put up for user testing, so we could see early on which features could be improved. From this we learned and decided that changing the top navigation to a bottom navigation is a better option. Screenshot 2023-12-20 at 02 29 27 Screenshot 2023-12-20 at 02 29 36 Screenshot 2023-12-20 at 02 39 49 Screenshot 2023-12-20 at 02 43 37

Character design
For the character design we made a few more versions to put into the first iterations of the designed prototype. Screenshot 2023-12-20 at 02 31 30

Development: integrating chatbot in project

Creating the chatbot demo gave us a solid understanding of how the library is used and how we can implement the features we will need. With the demo finished, we could now incorporate it into the project. This posed another couple of challenges; as mentioned before, we are using JSON files to store all the info and avoid code repetition. In our demo, we figured out that we will need functions inside this info to add data to localStorage, to keep track of the badges collected and locations visited, and also custom components for things like a map to guide users to the location they're looking for. Because the custom components don't need interactivity, they were just written as JSX inside the component property of the chatbot steps. Of course, in JSON, functions and JSX aren't supported, so we had to find a workaround.

For the custom component, we turned the JSX into regular HTML and made a string out of that. Because we know we'll only use the component property for simple components, and thus it won't hold anything other than HTML strings, we go through all the step objects and turn the string value of the component property into HTML using dangerouslySetInnerHTML. For the functions, we have a similar approach. In the demo, we stored the function in the trigger property, because this was one of the few properties that supports functions, and it seemed logical as we could just return the id of the next step. To make this work in the JSON file, we turned the trigger property into an object that holds the function name and its arguments. Again, we go over each step object, and if the trigger property holds the function name "addToLocalStorage", we return that function as the trigger instead of the object.
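
In the JSON file, a step with such a trigger looks roughly like this (the content values are made up, but the keys mirror what our mapping code expects):

    {
      "id": "collect-badge",
      "component": "<p>You found the Game Tech badge!</p>",
      "trigger": {
        "function": "addToLocalStorage",
        "arguments": {
          "trigger": "next-step",
          "key": "badges",
          "value": "gametech"
        }
      }
    }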

We then end up with this code to turn our original parsed JSON file into a usable object for the chatbot component.

    steps = steps.map(step => {
        const { trigger, component, ...rest } = step;

        // if there is a component property, parse it as html
        if (component) {
            let content = step.component;
            rest.component = (
                <div dangerouslySetInnerHTML={{ __html: content }}></div>
            );
        }
        // if the trigger is a function, add function in trigger
        if (trigger && trigger.function === "addToLocalStorage") {
            rest.trigger = () => addToLocalStorage(trigger.arguments.trigger, trigger.arguments.key, trigger.arguments.value);
        } else if (trigger) {
            rest.trigger = trigger;
        }

        return rest;
    });

Development: testing NFC tags

Due to a delayed postal delivery, we could only test our NFC tags a few days later than planned. We chose to work with NFC Tools to write our tags; this is an app with decent reviews, available for the Android phone we are using. The writing process was quite straightforward, and it was also easy to lock and unlock the tags to prevent visitors from tampering with them. After writing one, we tried tapping it with two different smartphones. Luckily this worked as expected, without needing an app installed: on one phone the linked site opened right away, on the other a notification appeared that linked to the site. Something we'll have to take into account is that the setting for reading NFC tags might be disabled, so we'll have to communicate this in the explanation of our concept to visitors. It will also have to be clear that the phone has to be unlocked when tapping an NFC tag, and that the spot that scans the tag can differ per phone.

Week 5

Development: deployment errors

Last week we were able to set up a deployment flow for the project that worked fine. Because we are working with Next.js, we are using Vercel for deployment. As we integrated the more advanced features this week, we ran into some problems. Server-side rendering is new for us, and while developing, this obviously hasn't been much of a problem. When deploying, however, we need to make sure that what we are doing is actually possible before we reach the client side, or only do it there.
A big issue was figuring out how to change the content of the chatbots based on localStorage, as the window object is naturally not yet defined on the server side. We had to make use of a loading state to make sure the content was adjusted correctly before rendering our chatbot component.
Another deployment error was related to the library we're using for the AR filter. It took us a while to figure out that that was where the error was coming from, because it was a very generic error that seemed to point to the loading of files located in the 'public' directory. Luckily, among the few issues on the GitHub repo of the library, the same error was posted with a fix. It was stated that server-side rendering can pose various problems with this library, so simply turning it off for this component fixed our error.
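
Simplified, the two fixes look roughly like this (the component path and storage key are illustrative): the AR component is loaded with next/dynamic and ssr turned off, and the badge info is only read from localStorage after the component has mounted.

    import dynamic from 'next/dynamic';
    import { useEffect, useState } from 'react';

    // The AR filter component is only loaded on the client,
    // so the library never runs during server-side rendering.
    const ARFilter = dynamic(() => import('../components/ARFilter'), { ssr: false });

    // localStorage only exists in the browser, so we read it after mounting
    // and keep a loading state (null) until then.
    const useCollectedBadges = () => {
      const [badges, setBadges] = useState(null);

      useEffect(() => {
        setBadges(JSON.parse(localStorage.getItem('badges') ?? '[]'));
      }, []);

      return badges;
    };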

Development: start of styling

Most of the rest of this week consisted of writing new React components and the CSS for them. We are using CSS Modules for the first time, which has been really useful for scoped styling.
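
As a small example of how the scoping works (file and class names are made up):

    // Chat.module.css defines the classes; importing it gives locally scoped class names,
    // so .option here can't clash with an .option used elsewhere.
    import styles from './Chat.module.css';

    const ChatOption = ({ children }) => (
      <button className={styles.option}>{children}</button>
    );

    export default ChatOption;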

Design: Finishing character design

This week was much more focused on the character design and on finishing a character fitting each location, so that we could have them on display for the testing moment. Screenshot 2023-12-20 at 02 34 37

Design: Starting badge design

Alongside the character design we also made some tests for the badges, iterating on different badge styles to see what fits the concept best. Screenshot 2023-12-20 at 02 52 47

Design: Mobile & tablet design

After changing to a bottom navigation we made further iterations of the mobile and tablet design, also improving the hierarchy and flow of the chatbot with what we learned from the user testing. It was important that everything was prototyped for the testing moment.

Week 6

Development: more deployment errors

During the testing in week 5, we found out that the picture functionality of the AR filter wasn't working like it's supposed to on mobile Apple devices. Sort of luckily, the same error occurred in the Safari browser on our MacBooks, so we could debug it there. We found out the problem came from the second library we had brought in to take screen captures of the DOM (dom-to-image). There was no real error message we could get clues from, so we opted to look for a new solution that got rid of this library again. The original problem that caused us to switch to dom-to-image was that the first library, html2canvas, rendered the AR filter with a white background. After looking a bit more into html2canvas, we found out that there is a property we can set to determine the background colour. Simply setting it to null made the background transparent and the image usable for layering.
This is what the final, simplified, working code looks like. Compared to the last code snippet on this topic, there are some more differences than the one we just talked about, but this is simply because at this point we've taken more advantage of React, so useState really came in handy here.

    const handleTakeImage = async () => {
        const ARView = arFilterRef.current;

        // Get the video and filter elements
        const video = ARView.querySelector('video');
        const filter = ARView.querySelector('canvas');

        // Capture the camera feed as a JPEG
        const canvasV = await html2canvas(video);
        const dataV = canvasV.toDataURL('image/jpeg');

        // Capture the filter as a PNG, so the transparent background is kept
        const canvasF = await html2canvas(filter, { backgroundColor: null });
        const dataF = canvasF.toDataURL('image/png');

        setImageV(dataV);
        setImageF(dataF);
    };

When we thought we had finally solved everything related to the picture-taking functionality, we discovered that there was still another problem, again only on iOS. This time, the issue was only on mobile Apple devices, not on desktop Safari, so we had to do a lot of test deploys to test on an iPad. The problem was that once a badge had been collected, the filter would not allow taking pictures anymore. There were no errors however. At first, we thought it might have something to do with a useEffect in the badge overview component that caused a re-render. After a long, tedious process of turning parts of the code off and on, we eventually figured out that the images that get rendered when badges have been collected were causing the problem. For these images, we were using the Next.js Image component, which is built on top of the regular HTML img tag to optimize images. When replaced with the regular HTML tag, the functionality worked again. We are still unsure how images in one component can influence the functionality of another, but we are happy we solved it.

Development: other errors

When navigating from the page with the AR filter to another one, we were getting a 'RuntimeError: abort' error. We suspected it had to do with the filter not being cleaned up properly. The library for the filter has a stopTracking function; when we ran it before navigating to another page, the error was gone. The difficulty is running this function automatically at the right time. We tried running it in the return of a useEffect, which should run right before the component unmounts, but there the reference we had set to the filter could no longer be found. So far we have not found a solution for this, but luckily the user can't notice this error.
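
For future reference, the cleanup pattern we tried looks roughly like the sketch below; copying the ref's value inside the effect (instead of reading it in the cleanup) is the usual suggestion for this kind of problem, but we haven't verified yet that it solves our case. FilterView here is just a stand-in for the library's filter component.

    import { forwardRef, useEffect, useRef } from 'react';

    // Stand-in for the AR filter component from the library.
    const FilterView = forwardRef((props, ref) => <div ref={ref} {...props} />);

    const FilterPage = () => {
      const arFilterRef = useRef(null);

      useEffect(() => {
        // Copy the ref's value while the component is mounted, so the cleanup
        // doesn't read arFilterRef.current after it has already been reset.
        const filter = arFilterRef.current;

        return () => {
          // stopTracking comes from the AR library; we assume it is reachable via this ref.
          filter?.stopTracking?.();
        };
      }, []);

      return <FilterView ref={arFilterRef} />;
    };

    export default FilterPage;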

Design: Finishing badges

To be able to start on the 3D filter, the badges had to be ready, so we shifted our focus to finalising them and moving on to the 3D versions. Screenshot 2023-12-20 at 02 57 00

Design: 3D badges

Design: 3D filter model

Design: Character animations

Week 7

Development: finishing up

The last few days of the project were spent adding in assets and making sure the 3D model works. In addition, we took a critical look at our project structure and cleaned it up a bit. The biggest rework was the data structure of the JSON files. We were keeping the info for the conversation and the badges in separate files, which meant some of the info was duplicated, which of course is not good at all. We reworked the JSON files into one file per location plus one for the navigation conversation, and added all of the badge info in there, so we aren't repeating any data anywhere.