
  • Learning Experience Designer Tool

    Learning Experience Designer Tool

    How do you build a tool to create story-based learning experiences? I am a developer and we are building such a tool. I would like to write about the challenges and the steps on this road. But first, let me start with a story:

    A friend of mine, Marcel, asked me to help him learn Viennese German. He already speaks German, but in Vienna people speak a certain dialect that is not always easy to understand. I realized that this is a real-world need: someone who can, needs and wants to learn something new. This is very valuable.

    So I started the task – which later became a little challenge – to create an amazing learning experience for Marcel. He liked the story builder game, so I took this as the basis for the experience.

    This game starts with a story and makes the student learn like an AI does: always just predict the next word. A big part of AI is just this skill, predicting the next word well. I actually believe that a certain part of human intelligence is exactly this, predicting the next … . Our minds come preconditioned with the basis of this ability, which gets developed during our childhood years. With the rise of AI solutions since the 2010s, we as humankind can also use these discoveries to understand ourselves better. Some interesting revelations:
    – It is impossible to train neural networks starting from all-zero values. Entropy was, is and will be part of the journey. It is natural.
    – To find global multidimensional mountaintops, sometimes one needs to throw away the results of local mountaintops (and waste many tens of thousands of dollars of computation).
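
    As for the game mechanic itself: stripped to its core, the loop is very simple. Here is a minimal sketch of the idea in TypeScript (my own illustration, not the tool’s production code):

      // Minimal sketch of the story builder mechanic: the story is revealed word by
      // word, and from a certain point on the learner has to pick the next word from
      // three options, only one of which is correct.
      type Option = { text: string; correct: boolean };

      export function playRound(
        revealedSoFar: string[],   // the part of the story already shown
        options: Option[],         // three options, exactly one marked correct
        pickedIndex: number,       // the learner's choice
      ): { advanced: boolean; story: string[] } {
        const picked = options[pickedIndex];
        if (picked?.correct) {
          // Correct prediction: the chosen word is appended and the story continues.
          return { advanced: true, story: [...revealedSoFar, picked.text] };
        }
        // Wrong prediction: the story does not advance and the learner tries again.
        return { advanced: false, story: revealedSoFar };
      }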

    Planning

    Anyway, back to the story and my quest to create an amazing learning experience for Marcel. I started out with the following steps:

    1. Create story
    2. Create markup for the story builder game
    3. Test
    4. Make link available without signing in
    5. Send link

    Prompting

    Creating the story was very easy. First prompt: “find German words typically used in Wien”. Then, after writing around 15 expressions, ChatGPT offered: “Would you like some example dialogues or phrases using these? Or maybe you’re looking for a themed list (e.g., food, travel, emotions)?”

    My second prompt: “yes, please make an example dialog using wien words in German, location in Wien city, playful, friendly, funny, B2 level”. Marcel speaks at around C1-C2 level, but when we study new expressions we need to lower the level a bit to keep more focus and energy for the new words. B2 level is more than enough for everyday conversations anyway.

    The LLM delivered this story (partially shown):

    Markup Creation with Prompts

    Now the markup creation started, and this is where the challenge began: none of the LLMs I tried could deliver the markup with one prompt. I tried ChatGPT 4o, Grok 3 and Gemini 2.5 Pro.

    I had defined a simple markup language earlier, also assisted by an LLM. Now all my LLMs and I just had to fill in this markup text, but it proved too difficult for them.

    The beginning of the prompt was like this:
    “GENERATE MARKUP TEXT FOR LANGUAGE LEARNING INTERACTION

    We design a language learning interaction. I need a markup language for the following story builder game for language learning. In the markup language the full story is also stored, but for each word starting from the 10. word a choice of words is also stored. This choice of words has a choice of three different words. Only one word is correct grammatically that is marked with correct=true. All other words are clearly not correct. The user has to choose the correct word to continue.”
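
    I will not reproduce the whole markup specification here, but the structure it has to encode is roughly the following, expressed as TypeScript types with a made-up sample fragment (this is only an illustration; the real tool uses a text markup, not JSON):

      // Illustrative structure only – not the actual markup syntax.
      type WordOption = { text: string; correct: boolean };

      interface ChoiceSet {
        wordIndex: number;       // position in the story; choices start at word 10
        options: WordOption[];   // three options, exactly one with correct = true
      }

      interface StoryMarkup {
        story: string;           // the full story text
        choices: ChoiceSet[];    // one choice set per word from word 10 onwards
      }

      // Hypothetical fragment of a Viennese-style dialogue
      const sample: StoryMarkup = {
        story: "Servus! Wie geht's dir heute? Gemma nachher auf einen Kaffee ins Kaffeehaus?",
        choices: [
          {
            wordIndex: 10,
            options: [
              { text: "Kaffee", correct: true },
              { text: "Kaffees", correct: false },  // clearly wrong forms
              { text: "Kaffeen", correct: false },
            ],
          },
        ],
      };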

    ChatGPT 4o gave choices where multiple words were correct. Grok 3 was glitchy a few times (citing technical errors), then gave very good word choices and remarks (see below) until word choice 12, when it simply stopped, saying “the pattern continues”. Gemini 2.5 gave mostly good choices, with around 10-20% where there were multiple correct answers (which is an error in this case).

    In the end I used the Gemini 2.5 Pro version; it was at least a complete answer. It seems that for this kind of story game generation, multiple structured calls are needed – a kind of AI-calling logic (or AI).
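
    I have not built this orchestration yet, but what I have in mind is roughly the following: ask for the choice sets in small batches and re-ask whenever a batch comes back broken. A sketch, assuming a hypothetical callLlm wrapper around whatever model API is actually used:

      type Option = { text: string; correct: boolean };
      type ChoiceSet = { wordIndex: number; options: Option[] };

      // Placeholder for the actual LLM API call (ChatGPT, Grok, Gemini, ...),
      // expected to return its answer already parsed into choice sets.
      type LlmCall = (prompt: string) => Promise<ChoiceSet[]>;

      const batchIsValid = (batch: ChoiceSet[], expected: number): boolean =>
        batch.length === expected &&
        batch.every(
          (c) => c.options.length === 3 && c.options.filter((o) => o.correct).length === 1,
        );

      export async function generateChoiceSets(
        storyWords: string[],
        callLlm: LlmCall,
        batchSize = 5,
        maxRetries = 3,
      ): Promise<ChoiceSet[]> {
        const result: ChoiceSet[] = [];
        // Choices start from the 10th word, as in the original prompt.
        for (let start = 9; start < storyWords.length; start += batchSize) {
          const end = Math.min(start + batchSize, storyWords.length);
          const prompt =
            `Story: ${storyWords.join(" ")}\n` +
            `Return choice sets (3 options, exactly 1 correct) for words ${start + 1} to ${end} as JSON.`;
          let batch: ChoiceSet[] = [];
          for (let attempt = 0; attempt < maxRetries; attempt++) {
            batch = await callLlm(prompt);
            if (batchIsValid(batch, end - start)) break;
            batch = [];
          }
          if (batch.length === 0) {
            throw new Error(`No valid choices for words ${start + 1}-${end}`);
          }
          result.push(...batch);
        }
        return result;
      }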

    I sent Marcel the link that worked and he was happy about the story. He found it funny and suggested some improvements, like making the animations faster, including translations or definitions, and saving certain expressions. He is also a developer, so he advised using the effort-impact matrix for the backlog.

    Conclusions

    Conclusions I have about this development:

    1. Coding changed to “vibe coding”. Now I am at a point where most of the code is generated, and I check it rather than write it directly. This did not come automatically though: to get to this point, I had to:
      1. Select the best architecture (Next.js)
      2. Select the best authorization library and use it
      3. Define the application folder structure (a rough, illustrative sketch follows after this list)
      4. Design and create the database tables
      5. Design and create the root components and routines
    2. Next.js, Vercel.com and Cursor.ai combine well
    3. We need tools not only to call AI automatically multiple times, but also to check and correct the results in a comfortable and easy way
    4. I believe human developers and architects will always be needed. Humans are better at agency, the ability to get things done no matter what.
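
    To make point 1 above a bit more concrete, an application skeleton along these lines could look roughly like this in a Next.js App Router project (purely illustrative file names, not the project’s actual layout):

      app/
        layout.tsx                  # root layout, loads the auth session
        page.tsx                    # landing page
        story/[storyId]/page.tsx    # the story builder game for one story
        api/stories/route.ts        # endpoints for stories and their choice sets
      components/
        StoryPlayer.tsx             # shows the revealed story and the 3-option picker
        WordChoice.tsx
      lib/
        db.ts                       # database client and table helpers
        markup.ts                   # parser/serializer for the story markup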

    The author of this article, Zsolt Balai (1977), is a developer working on this „learning experience designer” to create amazing gamified learning experiences. He is interested in language learning and teaching and believes that real spoken languages were, are and will be important in every age. He did use AI to create some pictures, but did not use any AI for the text of this article. This is why it is awkward sometimes, but at least real and authentic.

  • Retrospective: 3D learning portal ancientc.com 2015-2024

    Retrospective: 3D learning portal ancientc.com 2015-2024

    2024 marks the end of the “language playground on ancientc.com” era, and I think it is valuable to close this period with a retrospective article.

    This was an effort to create a learning platform with the following properties:

    • 3D gamified interactive learning
    • Database driven
    • Multiuser / social interactions

    This effort was not successful, but we learned a lot during these years. Here are some of our milestones:

    Milestones

    • 2015 Register domain ANCIENTC.COM
    • 2016 February Vision VR/AR Summit Los Angeles
    • 2016 March Receiving HTC Vive
    • 2016 Ancientc.com portal with basic portfolio
    • 2016 HTC Vive navigation asset for Unity 3D
    • 2016 May AR Chinese Characters learning prototype in Polyglot Gathering Berlin
    • 2016 Releasing Learn Chinese with Language Mahjong
    • 2021 UP Academy together with friends
    • 2021 Attempting to publish a 3D multiplayer language learning environment and failing miserably
    • 2024 Closing ancientc.com

    2015 Register domain ANCIENTC.COM

    It was my dream to create a portal for 3D playful language learning. I was looking for a good name and decided that ANCIENTC.COM was fine because it starts with “A” and sounds like “Ancient See”, which I find a nice expression. It is not related to any other company or concept, just an imaginary name.

    2016 February Vision VR/AR Summit Los Angeles

    I participated in the Vision Summit in Los Angeles, which was an amazing conference about VR and AR. I met important players in this field and felt “included”.

    Here you see Jesse Schell, the author of “The Art of Game Design: A Book of Lenses”, talking about his adventures.

    It was a huge surprise that all conference attendees received the brand new HTC Vive set. I still remember the uproar in the conference room as Gabe Newell announced the free gift 🙂

    I think the Vive was the first really well-made VR system with excellent tracking. The Oculus Rift was also released in March 2016, but the Vive controllers and handling were better, especially if you were standing.

    2016 March Receiving HTC Vive

    I was one of the first in Europe to receive an HTC Vive VR set. It was amazing; I showed it to all my friends and to some journalists. It was a big step up from the DIY VR sets I had before.

    2016 Ancientc.com portal with basic portfolio

    In 2016 I was in California with some friends and we started the portal “ancientc.com”, which you can still find under zbalai.com/ac. We were planning to create amazing learning solutions. The hills in this picture are actually from Sedona, Arizona, near a nice and calm Buddhist park.

    It was surprisingly difficult to create a simple responsive page, one that looks good on a phone, a tablet and a desktop alike. I needed to understand HTML and CSS, and I quickly learned Bootstrap to do the job. I had developer and designer friends, but I was the main actor behind all this effort.

    2016 HTC Vive navigation asset for Unity 3D

    I met Kevin Mack at the Vision Summit in 2016. He wanted a simple navigation asset for the HTC Vive, and a few weeks later we worked some hours together to create one. We were looking for the simplest solution, which turned out to be simply adding a tiny force to the player controller while the Vive controller’s trigger button was pressed.
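
    The published asset is a Unity C# script, so the following is just a language-neutral sketch in TypeScript of that core idea: every frame, while the trigger is held, add a small force along the controller’s forward direction and integrate it into the player’s position:

      // Sketch only – the real asset is Unity C#; this just illustrates the idea.
      type Vec3 = { x: number; y: number; z: number };

      const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
      const scale = (v: Vec3, s: number): Vec3 => ({ x: v.x * s, y: v.y * s, z: v.z * s });

      interface PlayerState {
        position: Vec3;
        velocity: Vec3;
      }

      // Called once per frame with the controller's forward vector and trigger state.
      export function updateFlight(
        player: PlayerState,
        controllerForward: Vec3,   // unit vector the controller points at
        triggerPressed: boolean,
        deltaTime: number,         // seconds since the last frame
        thrust = 2.0,              // the "tiny force" (assumed value, in m/s^2)
        drag = 0.98,               // mild damping so you slow down after releasing
      ): PlayerState {
        let velocity = scale(player.velocity, drag);
        if (triggerPressed) {
          velocity = add(velocity, scale(controllerForward, thrust * deltaTime));
        }
        return {
          position: add(player.position, scale(velocity, deltaTime)),
          velocity,
        };
      }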

    I was amazed to be able to fly freely through my artificial spaces and Kevin’s. I even published the asset, and with a small fix it still works after eight years 🙂
    https://assetstore.unity.com/packages/tools/flight-navigation-for-htc-vive-controller-61830

    Kevin published his amazing dream world with a similar navigation:
    https://www.kevinmackart.com/blortasia.html

    2016 May AR Chinese Characters learning prototype in Polyglot Gathering Berlin

    Polyglot congresses and gatherings are amazing chances to meet people interested in languages and in learning languages. I advise you to participate in one of them if you speak more than two languages.
    https://www.polyglotgathering.com/2024/en/

    I created a prototype to recognize Chinese characters in an AR – augmented reality – app and show whether the sentence formed is correct or not. We tried this at the Polyglot Gathering in Berlin in 2016 and had a lot of fun with it. The accepted sentence list was limited, but polyglots are naturally curious and playful types, so they liked this type of AR gamification a lot.

    The AR system I used at that time could recognize at most 5-6 characters on a screen, which posed a problem if you had, say, 15 characters on screen. The licensing was also not clear, especially if you wanted to recognize dynamic characters. Because of these uncertainties I decided to pursue other solutions instead.

    Important insight: in 2015 and 2016 there were only 2-3 good solutions for augmented reality apps, like Vuforia for example. They delivered a closed-source DLL (compiled module) for you to use in your app. You could augment target images only if:

    - they were fixed
    - there were not more than ten of them (as I remember, maybe only 5?)
    - you didn’t earn any money
    - you agreed that they call back to their servers each time your app starts
    - you agreed to all their licensing terms

    I am talking about the free solution, which was mainly for testing and personal purposes. Despite all this, I am still thankful for the fun apps we could develop.

    If you wanted dynamic AR content in 2016 that got augmented from certain target images, you needed to pay and sign complex agreements. An AR engine was like esoteric magic.

    This all changed with the deep learning revolution (re)started by Geoff Hinton and others in the 2010s. Today there are open-source neural network libraries to recognize all square-like objects in a video stream, and other libraries to recognize any Chinese character. There are open-source LLMs to evaluate a sentence and decide whether it is correct, assess the language level used in it, and describe the sentence and any expressions and words in it. So previously unthinkable and very expensive solutions have become accessible.

    2016 Releasing Learn Chinese with Language Mahjong

    I started developing Language Mahjong in 2012; you can read the beginning of the story here: https://zbalai.com/_wp/language-mahjong-story/. After countless challenges and technical hurdles with Unity 3D, development slowed to a halt.

    Only after a random encounter with a friend, Rita, who was learning Chinese for her Astrology exam, did we restart development. 

    Mahjong is a relatively simple mechanic; a version can be done in 2-3 hours. There are, however, so many technical details connected to app publishing, sounds and graphics that we had to invest around 2000-2500 hours to finish this project. (The main barrier being my own perfectionism.)

    2021 UP Academy together with friends

    There was a great place for young artists, entrepreneurs and developers: UP Center and UP Academy in Budapest (in Buda, to be more precise). An amazing friend of mine, Zoltán Tóth, and his friends started the project. As far as I know, the main reason was to help other young people grow and progress with their own projects.

    A friend of mine, Robert Retter, and I decided to join UP Academy for a few months to learn about many different subjects:

    • Self development/reflection
    • Teambuilding
    • Time management (Pomodoro and other methods)
    • Online Collaboration principles/tools
    • Strategy
    • Marketing
      SEO

    I experienced and learned a lot on these occasions. I would like to send special thanks to our mentors, József Kollár, Ákos Vörös, and of course Zoltán Tóth, the main organizer.

    2021 Attempting to publish a 3D multiplayer language learning environment and failing miserably

    In 2021 one UP Academy mentor mentioned that there was a gamified learning Facebook group for teachers in Hungary. We posted a question asking for help, and a very innovative and friendly teacher answered our call. She even joined us – me and Robert – in a session to specify a word-learning 3D playground. On 15 July 2021 we had a clear picture of what to deliver. We promised to deliver a working version in 2-3 weeks.

    I created the background database necessary for the gamified learning environment. I prepared a simple playground with a word labyrinth, a word shooting game and a stone jumper. It was well on its way, and then the solution started to crumble:

    We used a Unity 3D WebGL build as the frontend and Firebase as the backend. We desperately needed a solution that connected the Unity 3D part with Firebase, ideally both in the Unity 3D editor AND in WebGL builds.

    There was such a solution: a developer called “Twisted Tamarin”, a Hungarian guy living abroad, developed an asset that could connect Unity 3D WebGL with Firebase. I bought his asset for around 50 EUR and we started to use it. We even got to the point where some words were loaded into the language games.

    Then, sometime in July 2021, he realized that his solution was being pirated into unlicensed apps. So he added an additional check each time someone started up an app with this asset included. This was already a bad sign, since you don’t want your app to say hello to a third party each time it starts. (Unity 3D itself sends a UDP packet to an address each time your app starts, but that is OK for me. One packet. Not one packet for each asset I included in my game…)

    Then a routine ran to check licensing each time we started the Unity 3D editor. This routine often – about half the time – failed, so we needed to manually log in/register/re-register/restart some routines or programs each time the Unity editor started. This made effective development very difficult.

    Additionally, I realized that we did not have authorization for each individual user, just for the group as a whole. This meant that a JavaScript-savvy student could crack the system in 4-8 hours. (Or maybe 2-4 weeks, or maybe never; I cannot really tell.)

    School networks are very restricted, and we didn’t know whether our requests would get through on such systems.

    Unity 3D build time went up from 60 seconds to 180-240 seconds, which made development slower.

    So the effort slowly came to a halt. I looked for an alternative Unity 3D – Firebase asset and invested around 60-80 hours into it, in vain. I looked for a different database type, then realized that we would need both (1) a database for words and users and (2) a dynamic network data system for Unity 3D like Photon, because Firebase had handled both of these. Unfortunately, I did not find any comparable solution on the market at that time.

    So around September / October 2021 I had to accept that we could not deliver our solution…

    2024 Closing ancientc.com

    Now, eight years after our start in 2016, I am closing ancientc.com in its current form. All articles have been moved to learnandsmile.net. If no one takes over the domain, it will be released in March 2025.

    Reasons I am happy 

    • We did not spend the 5000-15000 USD needed to create content. Now with LLMs you can generate quality content for much less, and the proofreading costs around 500-1500 USD.
    • I did not promise developers and designers that we would get rich from this alone.
    • We did not generate thousands of lines of code. Now with LLM coding tools like GitHub Copilot, Cursor, Claude, Llama, … I can test my “intention-based development” concept.
    • Most importantly: we have not left paying customers in the ditch. Often, when a company goes down, it cannot fulfil its promises anymore. I would like to create an antifragile system/company that is prepared to deliver on all its commitments.

    What can we learn from this?

    This is my standard question for every endeavour we undertake. So what can we really learn from this?

    I think one secret of Silicon Valley is applying service design (Design Thinking) in their projects. The “empathise, get to insights, ideate, prototype, test” cycle is valuable. Reducing risk as soon as possible is critical. Innovative gigs can be compared to rock climbing. Most people would simply fall down and die without ropes and tools. These ropes are guidelines in Service Design/Design Thinking. Ignore them at your peril.

    What service design principles did I ignore?

    A first prototype can have flaws: we could have published a first 3D language playground with group authorization only and seen whether students could really break it or not.

    A first prototype should be small: we planned 3-4 games for the first word-practicing 3D space, and it was way too much work. 1-2 games would have been enough.

    Dare to be imperfect: even if you cannot yet edit the words, it is worth testing with a class of students using a fixed word set, for example. Their positive feedback can give the necessary inspiration for the next steps.

    Skills I have, skills I need

    These are my “Insights Discovery Wheel” positions. I see a reformer/director who desperately needs support and inspiration as complementary skills.

    Interestingly, many friends and developer colleagues are predominantly blue. It seems it takes extra effort for “blue” people to get to know and communicate with “green” and “yellow” people. (See “The Big Bang Theory” for more details on this 😉 )

    Next Steps

    • Applying service design tools for the next steps, like Importance/Difficulty Matrix
    • Collecting skills and resources to continue
    • Delivering value as soon as possible

    What a journey it was! Thank you for reading this long, long article; I should have finished like five pages ago 😀

  • Can you imagine a language learning environment in 3D?

    Can you imagine a language learning environment in 3D?

    I got these answers from the amazing ChatGPT today:

    Yes, I can imagine a language learning environment in 3D. In fact, there are already some virtual reality (VR) language learning platforms that offer a 3D learning environment.

    In this type of environment, learners can immerse themselves in a virtual world where they can interact with objects and people in the target language. For example, they might walk through a virtual market and practice their vocabulary by identifying different fruits and vegetables in the target language. Or they might engage in a conversation with a virtual language tutor or native speaker.

    The 3D environment can provide a more engaging and interactive experience for learners, allowing them to practice their language skills in a more realistic and immersive way. It can also offer a safe and comfortable space for learners to practice speaking and making mistakes without fear of judgment or embarrassment.

    Overall, a 3D language learning environment has the potential to revolutionize the way we learn languages and make the process more enjoyable and effective for learners.

    What would the architecture and layout of such a 3D environment look like?

    The architecture and layout of a 3D language learning environment would depend on the specific goals and objectives of the platform, as well as the target audience and language being taught. However, here are some general considerations for designing a 3D language learning environment:

    1. Immersive and realistic: The environment should be immersive and realistic, so that learners feel like they are actually in a foreign country or interacting with native speakers. This could include creating a virtual city with different neighborhoods, streets, and buildings that reflect the culture and architecture of the target language country.
    2. Interactive and engaging: The environment should be interactive and engaging, with opportunities for learners to explore and interact with objects, people, and scenarios in the target language. For example, learners could visit a virtual restaurant and practice ordering food in the target language, or engage in a virtual conversation with a native speaker.
    3. User-friendly and intuitive: The environment should be user-friendly and intuitive, with easy navigation and clear instructions. Learners should be able to move around the environment easily and access learning materials and resources without difficulty.
    4. Adaptable and customizable: The environment should be adaptable and customizable to meet the needs of different learners and learning styles. For example, learners could choose to focus on specific language skills, such as listening or speaking, or customize the environment to reflect their interests or learning goals.
    5. Collaborative and social: The environment should be collaborative and social, with opportunities for learners to interact with each other and practice their language skills together. This could include virtual language exchange programs or group projects that require learners to work together in the target language.

    Overall, the architecture and layout of a 3D language learning environment should be designed to create an immersive, interactive, and engaging learning experience that helps learners achieve their language learning goals.

    Author: Zsolt Balai, software developer and 3D learning space designer. This article was first published on ancientc.com, then moved here to learnandsmile.net.

    All paintings were generated using Stable Diffusion 1.5 (A1111) and ChatGPT.

  • Engage VR Review 2019 December

    Engage VR Review 2019 December

    A friend of mine had his VR machine stuck behind a corporate firewall, so he asked me to check out Engage VR on my computer. I was always curious how it works, so today I spent more than an hour in this amazing system, and I can’t wait to share my experiences.

    Engage VR intro

    Purpose

    I think the purpose of this system is to teach students in virtual reality and create 3D learning experiences. You can show the DNA spiral or medieval castles – and their demolition – in a usable, immersive 3D environment.

    Testing

    I started up the Engage VR system on Steam. Steampowered.com is a practical portal to distribute software and keep it up to date. You can download Engage VR for free there and start it up any time, even without a VR set.

    I think for VR you would need at least a GTX 970 graphics card (or one of similar performance) and a decent amount of memory (I would not start with less than 8 GB). Of course, the more graphics power you have, the more seamless your experience will be. I was testing it on a GTX 1080 Ti and didn’t experience any glitches.

    I started the intro recording, which was a nice start. First, the possibilities for professors are shown in a lecture hall, and then we go out to the fields to experience real immersion and blow up some castles. Then we have the Moon landing on the menu and WWII scenes, closed by a Titanic stunt. The best way to show the features of such a system is to actually take the user to such places, so the intro worked very well for me.

    Creating your own content

    It is possible to go to a lecture hall and write on the blackboard with your VR controller. It is an amazing feeling, standing in front of students and drawing something absolutely silly for them.
    Moreover, you can just pick a location and add premade objects to it. There are around eight hundred premade objects you can put into your scene; the pro edition has 1200+ objects according to their site. You still cannot import your own 3D objects, but you can show any videos or internet sites.

    User Interface and Navigation

    The navigation system was comfortable and easy to use. I am a developer, and I was amazed by the seamless integration of all my input devices: mouse, keyboard and VR controllers all functioned in a logical and expected way. In the book “Don’t Make Me Think”, Steve Krug asks how an interface should work. The answer is very simple to state, but very difficult to achieve in the real world: as expected. Engage VR reached this point in 2019; their VR works “as expected”, at least for me. It could be a standard for all future VR environments.

    For example, you could move using your left Vive touchpad, but you could also jump and turn with the right touchpad. This article doesn’t have the space to go into all the details, but I have tried something like 10-20 VR navigation systems and this one felt the best. I plan to write an article comparing VR motion paradigms, and I will certainly include Engage VR.

    Imperfections / flaws

    • Backward navigation was not always functioning cleanly: pressing forward worked every time, while pressing backward worked around 70% of the time.
    • Playing 360 videos from YouTube didn’t work: I got different directions for my two eyes, which was very confusing. If I closed one eye it worked, but I seem to prefer both. (I tried 2-3 videos, on the Vive.)
    • According to my friend, you cannot import custom 3D objects into this space yourself. You need to prepare your file in a special format and then send it to them.
    • On the screen where you choose IFX – your custom objects to appear – you have to choose from around 800 objects. The existing screen works, but today, in the world of the Google Material guidelines, users are more used to a sliding panel than to picking a small number at the bottom of the screen.
    • As soon as I started Engage VR, the audio output on my computer was forced to the Vive audio. Maybe it was intentional, but I would certainly like to choose which sound output I use.

    360 movies

    I think 360 movies can revolutionize certain types of education. I started “Gladiators in the Roman Colosseum” and was totally overwhelmed by being lifted up into the Colosseum pit. You cannot really explain full immersion; you can only try it yourself or show it to others.
    It is extremely useful for learning: it pushes students past the so-called “affective filter” (basically being bored to death by monotonous learning). It gives reality, mass and characters to an otherwise dry curriculum. I cannot prove it, but I would bet that enthusiasm for Roman culture triples after watching a 360 movie like this.

    Conclusion

    I couldn’t check multiple features this time, like user management, collaboration with other team members, building up huge scenes.

    Still, I can tell you that we have a state-of-the-art 3D/VR learning system in our hands. I respect the tremendous effort behind such software. If you are looking for a 3D/VR learning environment, you should definitely check it out and consider it for serious use. I have my doubts about 60-70-year-old professors putting „diving goggles” onto their faces for the sake of teaching, but in certain cases it is definitely worth building up. (Australia comes to mind, where you have few students across great distances.) You just need to put „extensive VR use required” into the job description for profs. 😉

    As for other uses, it is difficult to tell. For security or escape-routine training it might be useful. It certainly makes sense if your team members are spread around the planet and you would like to collaborate in a common virtual space. I personally would check it out just to experience a state-of-the-art multi-user editable 3D/VR environment.

    PS: Don’t forget to start the amazing 360 video about Roman Gladiators.

    Author: Zsolt Balai, software developer and 3D learning space designer. This article was first published on ancientc.com, then moved here to learnandsmile.net.

  • Budapest VR fly around prototype

    Budapest VR fly around prototype

    Katalin had a television appearance in Budapest, and for that she needed an app to show her 3D model in VR. So we urgently sat together and published this small app. You can just stare at different points of interest and fly there directly. The models are quite basic; this is mostly to collect experience with such an app.

    The VR movement experience can be disturbing though, especially if you are new to VR. The first version is set to five seconds of flying time, and maybe it should be ten. Also, the start of the flight should probably be very slow. I am not sure about the usability of this solution; maybe this is a learning process for both developers and users?

    Download for Android here:
    https://play.google.com/store/apps/details?id=com.ancientc.budapestvr2

     

  • Platform Independent Multi-user VR Space to Learn

    Platform Independent Multi-user VR Space to Learn

    Can you imagine entering a 3D space to learn languages, history or mathematics? I believe it will be an inevitable tool in our future learning process. This is a growing field, a blue ocean for developers. I am curious how these systems will change the field of education.

    There are many prototypes and solutions. Here I just list a few of them:

    The video is of course from the Engage team. Their software is great and worth trying. You can use it in non-VR mode on a PC too. It has a few limitations though: you need a Windows system to use it.

    So how can we contribute in this field? Maybe by getting rid of some limitations?

    It is possible to create a VR space with the following properties:

    1. VR and non-VR mode: You can enter the same space using VR and using just a normal computer screen
    2. Platform independence: you can enter using cardboard, Gear VR, Oculus, Vive, PC, Mac or practically any other Unity 3D target device
    3. Massive multi-user experience: It is possible for a few hundred users to be present in the same space, mixing VR, non-VR and all platforms.

    Engage VR titanic

    I wonder why this space doesn’t exist already. So we set out to push this concept at least to a well-usable prototype stage. What we did:

    • a server structure to store user locations and directions
    • client functionality to read all user locations every 100 ms
    • client functionality to post its own user location every 100 ms (a rough sketch of this sync loop follows after this list)
    • tested on these platforms: WebGL, PC, Android and the Unity editor (it should work on any platform)
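
    A rough sketch of that 100 ms sync loop, in TypeScript (hypothetical endpoint names; the real client was Unity 3D, this only illustrates the idea):

      // Each client pushes its own pose and pulls everyone else's every 100 ms.
      type Pose = { userId: string; position: [number, number, number]; yaw: number };

      const SERVER = "https://example.com/api"; // placeholder URL

      async function postOwnPose(pose: Pose): Promise<void> {
        await fetch(`${SERVER}/poses/${pose.userId}`, {
          method: "PUT",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify(pose),
        });
      }

      async function fetchAllPoses(): Promise<Pose[]> {
        const res = await fetch(`${SERVER}/poses`);
        return (await res.json()) as Pose[];
      }

      // Starts the loop; returns a function that stops it.
      export function startSyncLoop(
        getOwnPose: () => Pose,
        onRemotePoses: (poses: Pose[]) => void,
      ): () => void {
        const timer = setInterval(async () => {
          try {
            await postOwnPose(getOwnPose());
            onRemotePoses(await fetchAllPoses());
          } catch {
            // Tolerate transient network errors; the next tick simply retries.
          }
        }, 100);
        return () => clearInterval(timer);
      }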

    Multiuser VR learning space

    Of course this is just a pretotype – I am really embarrassed to show this humble picture – but users from different platforms are already present.

    Our next steps:

    • Research similar solutions, like Engage (4-8h)
    • Clarify necessary functionalities (4-16h)
    • Build prototype that we can publish (8-32h)
    • Look for possible partners (writing emails basically 1-3h)

    Our questions for you:

    1. What content would you put into this environment?
    2. What would be your main user device?
    3. What feature would be the most critical for you?

    We are in this together. Let us know how we can help create our future education platform.

    Engage VR users having fun

    Author: Zsolt Balai, software developer and 3D learning space designer. This article was first published on ancientc.com, then moved here to learnandsmile.net.