SwiftUI: this is Apple's interface construction library and how it works

We are still piecing together everything we learned at one of the most intense and newsworthy WWDCs we have had in recent years.

But I’m sure many of you noticed that in the last 20 minutes of the presentation the tone changed completely. From a user-focused gala, Apple moved on to talk about pure development. In this part it presented something that is a revolutionary first step towards the future of native app development on Apple systems: SwiftUI.


Let’s explain it so that we understand its importance and what a step it is for Apple. A step that will have an impact on the apps we use in our daily lives in the coming years.

It was a quiet night in ancient Greece…

Back in 1979, Steve Jobs, Jef Raskin and a few other lucky Apple engineers visited Xerox PARC, looking for an evolution that had not yet arrived in the world of microcomputing. Computers had begun to arrive in people’s homes, but the fact that they offered a text interface with a programming language (BASIC, at that time, in the Apple II) did not help to popularise them. It was necessary to go further.

On that trip they discovered the Xerox Alto, a computer with a monitor in portrait mode that had a graphical user interface (something no one outside the lab had ever seen before) and a curious device that controlled an on-screen arrow: the mouse.

Xerox engineers, led by Alan Kay, had unknowingly built, in a project the company had discarded and never understood, the future of computing. But they didn’t just create the computer: they knew that making applications for a graphical user interface was going to be very complex, so they also devised their own development paradigm that would revolutionize the programming world as well: the MVC (model-view-controller) pattern applied to object orientation.

Alan Kay is the forerunner of graphical interfaces and of object-oriented programming for them. His is the phrase Steve Jobs used at the 2007 iPhone launch: “People who are really serious about software should make their own hardware.”

In 1983 Apple launched the Lisa, and Xerox was selling its Star workstation, but their high prices (the Lisa started at $9,995) sent both products to failure, until the Macintosh arrived in 1984. It did two things right: first, it had a more affordable price (in comparison, $2,495). Second, it came with a series of disks with learning guides and some cassette tapes (yes, yes, tapes) that you put in your tape deck, pressed PLAY, and were guided step by step through how the Macintosh’s graphical user interface worked and the different apps that came on disk.

The cassette tapes that taught you how to use MacWrite and MacPaint

Developing for that computer was little short of hell. Steve Jobs’ team had been dedicated only to getting the product out of the door, not to how software would be created for it. Making an application for a graphical interface meant programming by hand every button, field or element that appeared on screen: no code could be reused, no libraries (or frameworks) existed, and any element took hundreds or thousands of lines. Making software development easy was the next step Jobs would have taken after the launch of the Macintosh, but, as we know, he was invited to leave the company he co-founded.

So he founded NeXT, and there he applied the other 50% of the great discovery he had made at Xerox PARC: developing the technology behind Alan Kay’s way of creating GUI applications. The work of the entire NeXT team was launched in 1988 and was a revolution. With the new Interface Builder application and the Objective-C language, you could pick up a canvas and drag and drop a button, a text field, a label, a check box… any item. Drag and drop. Then a connection (an outlet) was created between the code and the graphic element, and we could work with that element as an object.

Although conceived by Alan Kay in the 1970s, object-oriented development for graphical user interfaces did not reach the general public until 1988 with the release of Interface Builder for the NeXTSTEP operating system of NeXT computers.

Not only that: everything worked with two libraries that NeXT released with all the code needed to build apps and use those components, as well as basic facilities for working with strings, dates and other data types: AppKit and Foundation Kit. That was the moment development changed forever, and it was again Steve Jobs who captained another disruptive change in the history of computing, because frameworks as such had not existed until then.

Interface Builder in NeXT’s NeXTSTEP operating system

This technology was responsible for Steve Jobs returning to Apple in 1996, since the company he co-founded had been left without a development architecture (based on Pascal) after Borland bought the main tool used to create apps for MacOS. From there came OS X in 2001 and, in 2007, the UIKit library that gave life to the first iPhone.

Since then nothing has changed in terms of architecture: we can do more, faster and better, even with a new language like Swift. But the development architecture, the way we make apps by dragging elements onto a canvas and creating an outlet that connects them to the code, is the same. The foundation hasn’t changed in over 30 years… until now.

UIKit, imperative interfaces

The architecture used until now is based on what is known today as imperative interface construction. A construction in which I create a function and associate it with a button (for example). I create an action. When someone touches the button, the lines in that action are executed. Nothing could be simpler.

But an imperative interface has a problem that makes it less efficient: what we call “state”. Basically, the values we have in our code that, when the user interacts, must be reflected in what was touched. If I press the button, I receive a variable (a piece of data) which is the button itself. And on it, for example, I can change the colour or the text because it has been pressed. When I change that property of that variable which is the button, the interface has to react immediately and show that new colour or text. We are changing its “state”.
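To picture it, here is a minimal UIKit sketch of that imperative pattern (the controller and the handleTap name are illustrative, not from the keynote): the action is wired to the button with target-action, and when it fires we mutate the button by hand.

import UIKit

class CounterViewController: UIViewController {

    // The button is created and configured imperatively, line by line.
    let button = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        button.setTitle("Touch me", for: .normal)
        button.frame = CGRect(x: 40, y: 120, width: 200, height: 44)
        // The action is associated with the button: the target-action pattern.
        button.addTarget(self, action: #selector(handleTap(_:)), for: .touchUpInside)
        view.addSubview(button)
    }

    // When someone touches the button, these lines run and we change
    // the button's "state" ourselves, one property at a time.
    @objc func handleTap(_ sender: UIButton) {
        sender.setTitle("Pressed", for: .normal)
        sender.setTitleColor(.red, for: .normal)
    }
}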

Every possible value I can give to an element of the interface is a state. If I have a field that may or may not be active, I have a property of type Bool that will be either true or false (it only has those two possible values). True means active, false means inactive. Two states. But I also have another Bool property for whether the field is hidden or not. That already gives me four possible states to manage from the interface:

  • Hidden yes, active yes
  • Hidden no, active yes
  • Hidden yes, active no
  • Hidden no, active no

And any element of the interface has hundreds more possible states: colour, text, border colour, shadow, touched or not… a lot. The complexity is enormous, and the more complex it is, the more difficult it is for the system to manage an interface full of elements. Basically because it has to be “attentive” to all these possible state changes and combinations in order to react to them. Something that is not very efficient and is very prone to errors.
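As a hedged illustration (the function and property names are hypothetical), this is the kind of update code an imperative interface ends up needing just for those two Bool values:

import UIKit

// Two Bool properties already force us to consider 2 x 2 = 4 combinations
// every time something changes; every extra property multiplies that number.
func updateField(_ field: UITextField, isHidden: Bool, isActive: Bool) {
    field.isHidden = isHidden
    field.isEnabled = isActive

    switch (isHidden, isActive) {
    case (true, true):   print("Hidden yes, active yes")
    case (false, true):  print("Hidden no, active yes")
    case (true, false):  print("Hidden yes, active no")
    case (false, false): print("Hidden no, active no")
    }
}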

An imperative interface must be attentive to any change of state, and the combinations between so many elements can add up to millions of different possible states to represent.

The answer to making interfaces more efficient came in the 1990s, when layout languages such as HTML began to be used. There we have a perfect example of a declarative interface: an interface that is defined as it is and doesn’t change until the user interacts and moves to another page. But the HTML page itself, as we see it, never changes. It is immutable.

SwiftUI, declarative interfaces

Obviously, using a completely immutable declarative interface such as HTML is impractical for apps. So developers set to work to find ways to work with state in which any possible change to it is also declared in advance. This way, if a state changes due to a user interaction, the interface already knows what to do, and we don’t change it with further code. We already told it what had to happen when we declared it.

Let’s look at it even more clearly: if the interface doesn’t know what can happen to it, it has to be constantly aware of millions of combinations of possible changes. That’s a lot of observers waiting for events: a colour change, a field moving, a font change, an image moving… all the possible changes an interface can have (all its possible changes of state) must be permanently listened for, and the interface has to be ready to represent them.

But if I build a declarative interface, I am defining (before it is drawn) which states it has to observe and what it has to do when they change. It will only have to pay attention to those, because everything else will be immutable. It can even pre-process all the combinations. So the processing cost of managing that user interface is far lower, because when building the interface we already declare what it can and cannot do: all its rules. It only has to watch those, comply with them, and know before running how many possible combinations that interface will have. Nothing else.
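As a hedged sketch of what that looks like in SwiftUI (the view and property names are illustrative), the only mutable state is declared up front with the @State property wrapper, and the framework redraws the view when that state changes:

import SwiftUI

struct GreetingView: View {
    // The only mutable state this view declares.
    @State private var isPressed = false

    var body: some View {
        // The text and its colour are declared as a function of the state;
        // we never reach in and mutate the label by hand afterwards.
        Text(isPressed ? "Pressed" : "Hello World")
            .foregroundColor(isPressed ? .red : .primary)
            .onTapGesture {
                isPressed = true   // changing the state triggers the redraw
            }
    }
}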

Today many libraries, like Google’s Flutter or Facebook’s React, already use declarative interfaces, and now Apple has jumped on this development trend hand in hand with its Swift language and SwiftUI.

Thanks to SwiftUI, we will also see in real time the correspondence between the code declaring the interface and its result. Using its different components, the interface is built and drawn automatically for iPhone, iPad, macOS, watchOS and tvOS. Goodbye to the dreaded constraints, the rules that had to be given so that our interface could be drawn on any device whatever its size. Now all this is controlled automatically by the system, as a browser would do with HTML.

SwiftUI is the step Apple takes to join the trend that others like Google or Facebook have already set by creating their own libraries for building declarative interfaces, such as Flutter or React.

Of course, we have plenty of properties and modifiers at our disposal, which let us draw whatever we want without any problem, including animations, dynamic data sources associated with the interface, events… everything you need.
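A hedged taste of those possibilities (the view name and values here are illustrative, not from the keynote): modifiers chain onto each view, and an animation can be attached to a state change so the transition is drawn for us.

import SwiftUI

struct BadgeView: View {
    @State private var highlighted = false

    var body: some View {
        Text("WWDC 2019")
            .padding()
            .background(highlighted ? Color.yellow : Color.gray)
            .cornerRadius(8)
            // The scale change is animated automatically when the state flips.
            .scaleEffect(highlighted ? 1.2 : 1.0)
            .animation(.easeInOut)
            .onTapGesture {
                highlighted.toggle()
            }
    }
}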

A small example of structure

When we create a new project in Xcode 11, the first thing we see is a new checkbox asking whether we want to create the project with SwiftUI. This creates a project with a new scene delegate (which will be used, among other things, for the new ability to have several copies of the same app open at the same time), but we won’t have the usual Storyboard where we would normally start building our interface.

Instead there is a file called ContentView which, from the scene, is used to generate the home screen in code. The way it works, without going into much technical detail, is based on a Swift struct (a structure) that we declare to be of type View. This forces us to include a body property that returns the different view builders: the body of the screen.

struct ContentView: View {
    var body: some View {
        Text("Hello World")
    }
}

What’s inside body is a builder that creates a text and puts it right in the middle of the screen. It’s that simple. If I want the text to be bold, I put a dot after the closing parenthesis and call the bold() function.

Text("Hello World").bold()

If I want to put in more than one element, I have to use a grouping, like a stacked view. With VStack I create one and put in whatever I want.

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Hello World").bold()
            Image(systemName: "book")
        }
    }
}

This way I get a vertical stacked view with the text on top and, below it, an image from the new SF Symbols (San Francisco) symbol set representing a book. Now let’s add a button.

struct ContentView: View {
    var body: some View {
        VStack {
            Text("Hello World").bold()
            Image(systemName: "book")
            Button(action: {
                print("Touch")
            }, label: {
                Text("I am a button")
            })
        }
    }
}

Here I say that I want a button and pass it two parameters as code blocks (or closures): action, which is what it will do when the button is pressed, and label, which is what will be shown on the button, in this case a text. We could have passed an image and made a button out of an image.
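For example, a small variation of the same constructor, reusing the book symbol from above as the button’s label instead of a text:

Button(action: {
    print("Touch")
}, label: {
    Image(systemName: "book")
})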


This is the simplest and easiest case, obviously, just a small glimpse of all the possibilities when building interfaces. If we use macOS Catalina, we can see the canvas next to the code and add elements by dragging and dropping them into the interface, seeing how they look; any change on that canvas modifies the code in real time, and any change in the code changes the interface instantly.
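That live canvas is fed by a preview declaration in the same file; a minimal sketch of what Xcode 11 generates (the exact template text may differ):

import SwiftUI

// This struct is only used by Xcode's canvas to render the live preview;
// it is not part of the app that ships to devices.
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}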

A first step

This technology is an unprecedented step forward. Right now it works as a native Swift library that takes full advantage of the language and its features, and it is joined by another reactive programming library (for asynchronous events) called Combine, which we will talk about later if you are interested.

But be careful, we have to be realistic. Creating an interface-building library from scratch is an epic task, and Apple has only just begun; it lets us start using and learning (because we have to learn almost from zero) a new way of making interfaces. Right now, though, it works as a layer on top of the previous UIKit; it is not infrastructure-independent. Will it always be like this? No, Apple will make the engine independent. The Swift language itself followed that process: in its first version, Swift was a different way of writing Objective-C, translating to it in many elements and data types. And now it is completely independent in its architecture.

SwiftUI is a first step, which right now works on top of UIKit and not as a standalone library. But just like Swift, which was originally largely a translation of Objective-C and is now a standalone language, SwiftUI will take further steps in the future.

We are sure that what is now a layer over the old UIKit library, but one that already allows a different and more practical way of making apps, will gradually gain its own independent architecture, until it completely replaces UIKit and becomes a new library that, let’s not forget, is compatible with all Apple systems.

Just as with Swift when it was launched in 2014, we are taking a look into the future, and we can already start working with it. And we come back to the usual refrain: Apple has not invented anything; all this already existed. But they build their own version with a clear goal: to be the best. For now, they seem to be on the right track.
