Within the specific context of architectural design, by transporting the programming environment to the VE, researchers can have architects and clients manipulate the models in VR. Given that, in this industry, the production line involves multiple iterations of the solutions discussed by several stakeholders, having these discussions take place synchronously with the project's development can resolve the existing asymmetry in collaboration
[29]. This, in turn, considerably accelerates the ideation process, saving time and resources.
Solutions that allow for the modification of programs in VR have also been developed. Parametric Modelling Within Immersive Environments
[30], Shared Immersive Environments for Parametric Model Manipulation
[31], and Immersive Algorithmic Design
[32] present solutions to connect AD tools to a VE, where architects are immersed in their models, apply changes to the program, and visualize the corresponding impacts in real time.
All three solutions adopt a visual programming approach, with the former two
[30][31] only allowing parameter manipulation. This means users do not have access to the entirety of the code in VR, but only to a chosen set of parameters, which they can change via sliders. For this reason, these solutions fall within the scope of parametric design only, not AD, according to the definitions proposed by
[33]. The third approach
[32] goes further, also supporting textual programming and full control over the program. Nevertheless, although multiple code manipulation solutions are presented in the entry, their implementation within the proposed system is not discussed, nor is the proposal formally evaluated.
3. Algorithmic Design in Virtual Reality
ADVR aims to aid the algorithmic design process by integrating live coding in VR. In this workflow, architects use a Head-Mounted Display (HMD) and an AD tool integrated into a VE to code their designs while immersed in them. In the VE, the generated design is concurrently updated in accordance with the changes made to its algorithmic description. Seeing these updates in near real time allows designers to conduct an iterative process of concurrent programming and visualization of the generated model in VR, enhancing the project with each cycle.
Figure 3 presents a conceptual scheme of this loop: architects develop the algorithmic description of the design using an Integrated Development Environment (IDE) or a programming editor to input the coding instructions into the AD tool, which then generates the corresponding model. From the VE, designers then evaluate the results and, possibly, modify the algorithmic description from there, thus repeating the loop. Figure 4 presents a mock-up of the corresponding VR experience.
Figure 3. Conceptual scheme of the architect/program/model loop happening in VR: architects develop the algorithmic description of the design in VR, generating the corresponding model around them, whose visualization then motivates further changes to the description.
Figure 4. Mock-up of the ADVR experience: the architect live codes the algorithmic description of the design from within the VE.
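The edit, regenerate, and visualize loop described above can be sketched independently of any particular AD tool. The following is an illustrative Python sketch, not the actual implementation: `generate` stands in for the AD tool (source code in, model out) and `display` for the VE's visualizer; both names are assumptions.

```python
class ADVRLoop:
    """Minimal sketch of the edit->regenerate->visualize loop of ADVR."""

    def __init__(self, generate, display):
        self.generate = generate  # AD tool: algorithmic description -> model
        self.display = display    # VE: show the model to the immersed designer
        self._last = None         # last description that was evaluated

    def poll(self, source: str) -> bool:
        """Regenerate and redisplay only when the description changed.

        Returns True if the model was regenerated, False otherwise.
        """
        if source == self._last:
            return False          # no edit since the last cycle
        self._last = source
        self.display(self.generate(source))  # new model appears around the user
        return True
```

In practice, `poll` would be driven by editor save events or a file watcher, closing the loop of Figure 3 each time the architect changes the description.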
In order to provide an efficient coding platform in the VE that enables this workflow, the following components are required: (1) an interactive AD tool that allows for the generation of complex architectural models, along with (2) a VR tool that can be coupled to the AD framework. This tool must be a sufficiently performant visualizer, such as a game engine, to guarantee near real-time feedback; (3) a mechanism that allows designers to code while immersed, i.e., an IDE or a programming editor; (4) text input mechanisms, as well as (5) language and IDE considerations; and, finally, (6) the ability to smoothly update the model even when its complexity hinders performance. The implementations researchers chose to pursue for each of the numbered items are described below. Figure 5 presents the implementation workflow, featuring the chosen tools.
Figure 5. ADVR implementation: while immersed in the VE, the user accesses the IDE through the headset’s virtual desktop application. The AD tool is responsible for translating the given instructions into operations recognized by the game engine, which is connected to the HMD through the VR plug-in.
3.1. AD Tool
Regarding the AD tool, researchers opted for Khepri
[34], a portable AD tool that integrates multiple backends for different design purposes, namely Computer-Aided Design (CAD), Building Information Modeling (BIM), game engines, rendering, analysis, and optimization tools. The use of multiple tools along the development process is motivated by their different benefits: CAD tools outperform the rest in free-form modelling
[35]; BIM tools are essential for dealing with construction information
[36]; game engines present a good alternative for fast visualization and navigation
[37]; rendering tools offer realistic but time-consuming representations of models for presentation; and, finally, analysis and optimization tools inform and guide the design process based on the model’s performance
[38].
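Khepri's internals are not detailed in the entry; the following is a hypothetical Python sketch of the general idea behind a portable AD tool, in which the same algorithmic description is dispatched to interchangeable backends. All class and method names are illustrative assumptions, not Khepri's actual API.

```python
from abc import ABC, abstractmethod


class Backend(ABC):
    """One target tool (CAD, BIM, game engine, ...) behind a common interface."""

    @abstractmethod
    def box(self, origin, w, d, h) -> str:
        """Create a box primitive; returns a stand-in for the backend's object."""


class CADBackend(Backend):
    def box(self, origin, w, d, h):
        # Stand-in for a call to a CAD application's modelling API.
        return f"cad:box@{origin} {w}x{d}x{h}"


class GameEngineBackend(Backend):
    def box(self, origin, w, d, h):
        # Stand-in for spawning a scaled cube in a game engine scene.
        return f"engine:cube@{origin} scale=({w},{d},{h})"


def facade(backend: Backend, n: int):
    """The same algorithmic description runs unchanged on any backend."""
    return [backend.box((i, 0, 0), 1, 1, 3) for i in range(n)]
```

Switching from fast game-engine visualization to, say, a BIM export then amounts to passing a different `Backend` to the unchanged design program.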
3.2. VR Tool
Regarding the game engine, the choice fell upon Unity
[39], since it provides good visual quality, including lighting, shadows, and physics; good visualization performance for average-scale architectural projects; platform independence; asset availability; and VR integration. Through the SteamVR plug-in, Unity communicates with the two tested headsets: Oculus Rift and HTC Vive. It must also be noted that, despite the fast response guaranteed by the game engine, the capacity for real-time feedback will always be conditioned by the model's complexity.
3.3. IDE Projection
For architects to be able to program from inside the VE, they need to access their preferred IDE while immersed in VR. To this end, researchers use the virtual desktop application provided by most HMDs (including the ones used for this implementation), which mirrors the user’s desktop in VR, allowing the use of any application and, more specifically, of any IDE.
Figure 6 presents the workflow with the ANL model being live coded in VR. In this case, a change in the facade torsion parameter can be observed. Looking at the pictures in the first row, one may also observe that the mirrored desktop partially blocks the view of the scene. However, it should be noted that this two-dimensional representation fails to convey the full 360° experience the user has in the VE. Moreover, the screen can be moved, scaled, or hidden at the designer's will, as shown in the second row of images in Figure 6.
Figure 6. ADVR of the ANL model: manipulation of the facade torsion parameter (π, 2π, and 4π).
3.4. Text Input
Regarding textual input, there are several solutions currently available for the use of virtual and physical keyboards. Considering the results obtained in previous studies on the matter
[40][41], researchers opted for the latter, i.e., a physical keyboard, in this implementation.
Figure 7 presents a use-case of the assembled workflow, showcasing both the IDE display along with the generated model of ANL in VR, and the interaction mechanisms in action, inside and outside the VE.
Figure 7. ADVR workflow: on the left, the VE, where the model, the IDE, and the responsive virtual keyboard are visible; on the right, the architect typing on the physical keyboard.
Researchers stress that, despite outperforming the remaining solutions, the chosen one does not yet match the experience of typing on a normal keyboard outside the VE, particularly for users who cannot touch type and, thus, rely heavily on both visual and haptic feedback. This question is particularly relevant in the present context, as the majority of programming architects are, in fact, non-experienced typists. Hence, other solutions for the problem must be sought, and researchers believe the industry will soon provide them, as some interesting new concepts are already starting to emerge
[42].
3.5. Language and Editor
In order to guarantee fast typing results, researchers opted for a dynamically typed programming language, as these tend to be more concise than statically typed ones. Although the latter offer better runtime performance and can detect static semantic errors, they force the user to provide type information, making programs considerably more verbose. The chosen IDE can also help the user in the typing task, particularly by providing automatic completion for names and for entire syntactical structures, such as function definitions. For this implementation, researchers used the Julia language running in the Atom editor with the help of the Juno plug-in, a combination that considerably augments the user's typing speed. As shown in Figure 5, the Visual Studio Code editor was also tested, although the lack of a user-friendly menu with shortcut buttons forced users to type more in order to run commands.
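The conciseness argument can be illustrated with a small, hypothetical example (shown in Python for accessibility; the actual implementation uses Julia). The second definition mimics the extra type information a statically typed language would force the user to write.

```python
# Dynamically typed style: one concise definition, no type declarations,
# working for ints, floats, or any type with an overloaded '*'.
def scale(points, factor):
    return [p * factor for p in points]


# The same function with explicit type information, mimicking the annotation
# burden of a statically typed language (the hint syntax here is Python's;
# the point is the added verbosity, not the specific language).
def scale_annotated(points: list[float], factor: float) -> list[float]:
    return [p * factor for p in points]
```

When typing inside a VE, where every keystroke is costlier than at a desk, the shorter first form is the relevant advantage.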
3.6. Model Update
ADVR takes advantage of game engines' ability to efficiently process geometry. As a result, researchers can generate large-scale models, as is the case of the ANL, in a matter of seconds. When applying changes to the model in VR, these seconds are nevertheless troublesome, since the AD tool deletes and regenerates the entire model in each iteration, regardless of the number of changes applied. Consequently, as the model grows, a small lag becomes noticeable, and the sudden reshaping of the entire VE is disorienting, making it difficult to understand the effects of the applied changes.
To solve these problems, researchers implemented a multi-buffering approach that keeps the user in an outdated but consistent model while the new model is generated invisibly. When finished, the new model replaces the old one, allowing the user to immediately visualize the impact of the changes. It is also possible to keep several models available simultaneously, in different buffers, so the user can switch back and forth between them, facilitating comparisons and improving the decision-making process. Figure 6 illustrates this by showing two different views of the variations created.
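The multi-buffering idea can be sketched as follows. This is an illustrative Python sketch under assumed names (`ModelBuffers`, `generate_fn`), not the actual implementation: generation happens in a hidden buffer while the visible buffer stays consistent, and the swap only occurs when the new model is complete.

```python
class ModelBuffers:
    """Sketch of multi-buffered model updates for a VE."""

    def __init__(self):
        self.buffers = {}    # buffer name -> fully generated model
        self.visible = None  # name of the buffer currently shown in the VE

    def generate(self, name, generate_fn, source):
        # Build the new model out of sight: the buffer the user is looking
        # at is untouched, so they keep an outdated but consistent view.
        self.buffers[name] = generate_fn(source)

    def show(self, name):
        # Swap buffers: the finished model replaces the visible one at once.
        # The same call lets the user flip between stored variants to
        # compare alternatives.
        self.visible = name
        return self.buffers[name]
```

Because `show` only ever exposes finished buffers, the user never witnesses the model being torn down and rebuilt; they see one instantaneous replacement per change.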