The Web
- The Web Was Not Designed Thoroughly
- A Diverse Network of Web Servers
- Web Pages
- The Web Needs a Terrestrial Analogue, a Cyber Space
(Forthcoming sections.)
The Web Was Not Designed Thoroughly
A Case Example: HTML Does Not Provide Rich Text Editing
A web page does not naturally support rich text editing, and although this can be fixed with a variety of JavaScript solutions, it points to a larger issue: the web has not been designed comprehensively. As years pass, more features are attached to the HTML standard, but it is now time for a complete revamping. The reason is that too many parts of the web page are detached from each other. Additionally, too many website needs depend on third-party JavaScript projects for functionality that should be built into the web page format itself (currently HTML).
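To make the point concrete, here is a minimal sketch (not part of any standard widget) of roughly what a developer starts from: a contenteditable region and a couple of buttons wired to the browser's legacy, now deprecated, document.execCommand calls. The element ids and button labels are illustrative, and everything beyond this (toolbars, saving, sanitizing, consistent behavior across browsers) is left to hand-written or third-party JavaScript.

    <!-- A bare-bones rich text area: the browser supplies only the raw
         contenteditable primitive; the rest is up to your own JavaScript.
         Ids and labels here are illustrative, not from any standard. -->
    <div id="toolbar">
      <button data-cmd="bold"><b>B</b></button>
      <button data-cmd="italic"><i>I</i></button>
    </div>
    <div id="editor" contenteditable="true">Type here...</div>
    <script>
      // document.execCommand is the legacy, deprecated primitive; real
      // editors end up reimplementing most of this behavior themselves.
      document.querySelectorAll('#toolbar button').forEach(function (btn) {
        // Keep the selection in the editor when a button is clicked.
        btn.addEventListener('mousedown', function (e) { e.preventDefault(); });
        btn.addEventListener('click', function () {
          document.execCommand(btn.dataset.cmd, false, null);
        });
      });
    </script>

Even this tiny sketch already has to work around a focus quirk, which is the kind of detail the browser leaves to the developer.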
It is appreciated that the W3C accommodates the public in forming new committees to discuss different technologies, but the result is that there are always new groups working independently of each other, adding more and more piles of technology that do not act in coordination. The lack of central authority in the software design of the web has exacted a serious price: nothing ever gets better, only larger and more complex.
When web hobbyists want to build up web page features for their websites, like a photo gallery, they are rarely supplied adequate resources by the HTML web standards, and this is completely unacceptable. They are provided only the bits and pieces that they can tinker with to get anything to happen. There are no gallery widgets built into the HTML spec (in fact, there are no widgets at all), but there are now CSS grid elements you would use to start from the most elementary layer, a grid of containers. Then, you are left to figure out all the rest that actually makes a gallery widget work, such as how to display the photo thumbnails, how they are loaded with JavaScript code, how the user interacts with the thumbnails, and so on. Because of this, most people start with a gallery solution that is pre-made by someone else. For a photo gallery, this can make sense, but it actually applies to almost anything that needs to be done.
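As an illustration, and only as a sketch with placeholder class names and image paths, this is roughly that elementary layer: a CSS grid of containers plus hand-written JavaScript for the click behavior. Lazy loading, captions, and keyboard navigation would all still have to be added by hand or taken from a third-party project.

    <!-- A bare-bones gallery assembled from the primitives described above:
         a CSS grid of containers plus hand-written JavaScript for the click
         behavior. Class names and image paths are placeholders. -->
    <style>
      .gallery { display: grid; grid-template-columns: repeat(3, 1fr); gap: 8px; }
      .gallery img { width: 100%; cursor: pointer; }
      .viewer { display: none; position: fixed; inset: 0; background: rgba(0,0,0,.8); }
      .viewer img { max-width: 90%; max-height: 90%; margin: 5% auto; display: block; }
    </style>
    <div class="gallery">
      <img src="thumb1.jpg" data-full="photo1.jpg" alt="">
      <img src="thumb2.jpg" data-full="photo2.jpg" alt="">
      <img src="thumb3.jpg" data-full="photo3.jpg" alt="">
    </div>
    <div class="viewer"><img alt=""></div>
    <script>
      // Clicking a thumbnail swaps the full-size image into the viewer;
      // clicking the viewer hides it again. Loading indicators, captions,
      // and keyboard navigation would all still have to be written by hand.
      var viewer = document.querySelector('.viewer');
      document.querySelectorAll('.gallery img').forEach(function (thumb) {
        thumb.addEventListener('click', function () {
          viewer.querySelector('img').src = thumb.dataset.full;
          viewer.style.display = 'block';
        });
      });
      viewer.addEventListener('click', function () { viewer.style.display = 'none'; });
    </script>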
So much is lacking from HTML, and the collection of technologies that accompany it, that a lot of time is spent figuring out what third-party setup will be needed to accomplish normal user interface outcomes. Very likely you will be researching at least a few user interface projects to see which one fits your situation. Then you will mix those solutions with other third-party solutions you find, and you will have to learn the specifics of each one.
It could have been the case that HTML supplied some basic building blocks for making interfaces. You don't even get that, though, just div tags: containers and the JavaScript code that lets you hide and show them. Then, you go into CSS and style those div tags. It really is a mess, if people are being honest with themselves. The W3C standards give people little more to work with than raw code that is recognized by web browser software.
There are no tab view widgets built into HTML, but tab views are a staple of interacting with a computer's GUI. Again, the web only offers you raw div tags you can use to program your own tabs and tab views. You can find external JavaScript solutions for this, but the options out there are so numerous that it is obvious such a common user interface element should just be built into HTML. How long will it take for that to happen? It can take years.
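For illustration, here is a minimal sketch of that raw assembly, with hypothetical ids and labels: divs for the panels, buttons for the tab strip, and JavaScript that hides and shows them, which is the pattern most third-party tab widgets wrap.

    <!-- The whole of what HTML offers for a tab view: divs, buttons, and
         JavaScript that hides and shows them. Ids and labels are illustrative. -->
    <div class="tabs">
      <button data-panel="one">First</button>
      <button data-panel="two">Second</button>
    </div>
    <div id="one" class="panel">Content of the first tab.</div>
    <div id="two" class="panel" hidden>Content of the second tab.</div>
    <script>
      // Show the chosen panel, hide the rest: the pattern behind most
      // third-party tab widgets.
      document.querySelectorAll('.tabs button').forEach(function (btn) {
        btn.addEventListener('click', function () {
          document.querySelectorAll('.panel').forEach(function (p) {
            p.hidden = (p.id !== btn.dataset.panel);
          });
        });
      });
    </script>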
The W3C, the body that oversees the web's technologies, doesn't design web standards for what people really need. Instead, it just keeps adding more subcommittees and tacking more things onto the web, without examining anything that is going wrong. The result is that technically everything is there to get things done, but very little of it is tied together and none of it works well in tandem.
The Web Page Should Offer Built-in, Customizable Design Templates
To give another example of why the web is in need of an upgrade, consider that there is no native format for starting a web page from a design template. Since web developers, novice and advanced, always start with a web page design first, there should be a templating system native to the web browser. This may boggle the mind a bit for those who are used to HTML/CSS being the final layout technology. The reason you would want an independent format for a page template is that when you work with HTML you are working with a very raw format. HTML derives from SGML, which grew out of GML, a markup language developed at IBM in the late 1960s for formatting documents. It wasn't intended for people to make graphic designs, and it had no objective of enabling interactivity.
The introduction of CSS was an afterthought for the web, too, brought up for public discussion in 1994 after web browsers were already in use. Originally, web pages had no styling capabilities at all. CSS is really a kind of fix for what HTML didn't have.
There have been standard, web-safe fonts for many years (e.g. Georgia, Times), but there should also be official, stock design templates, which the web developer can start with and customize. This would require the introduction of another technology, a file format that exists apart from .css and .html. In fact, it is arbitrary that a browser processes only three major types of files: JavaScript, HTML, and CSS. They have never worked well together because they were added separately. People use them every day, not knowing how much more natural a more integrated setup would be.
This is the kind of thing an organization like the W3C isn't going to do: provide stock design templates. There is an extreme impulse in the typical programmer not to commit to any specific outcome, so as to keep things as flexible as possible. But the result, this time, is that there are no building blocks for people until they buy commercial software, which went ahead and did what the W3C should have dealt with. There really is no contradiction here for the programmer, but since programmers are the ones who dictate the standards of the web, they don't place value on features normally associated with product managers, like design templates. This is why it is very dangerous to leave core technologies entirely in the hands of software engineers: they focus on the practical and mundane engineering rather than the overview of how people are using it.
JavaScript, in the same way as CSS, was not part of any HTML standard at first. It was created by Netscape software engineer Brendan Eich in 1995, and it was considered unfinished when it was released. These technologies that many web developers take to be authoritative and expertly designed actually came out of casual circumstances, then came to be accepted as the main technology, and no further work was done to lift them out of their undeveloped state.
The popularity of JavaScript grew starting around 2009, for a few reasons. First, there were libraries like jQuery that turned it into something usable, through some elaborate programming techniques. This, along with user interface libraries written by JavaScript programmers, gave JavaScript a legitimacy it never had. Second, Google turbocharged JavaScript with its V8 engine, giving the impression that JavaScript working in combination with HTML would provide adequate means for the future of the web, which is definitely not right. That was actually a time when it would have been better to start over with the web as a whole and adopt a different programming design. JavaScript really only scripts HTML tags and document structure. Its other capabilities, like contacting servers for information, were not part of Brendan Eich's original design; they were bolted on later as the web became what it is today.
Replacing HTML with a Format That Is Not Plain Text
We have said that a design template deserves its own format next to HTML, but the HTML format itself could be replaced with a file format more accommodating of modern computing. The tags' angle brackets are tedious to type, as are the quotation marks that make up the attributes of a tag. As stated, they originate from SGML, the format on which HTML was based. As is the case with programming technologies today, HTML is written and edited as plain text. Even when a person does not edit it in that form, the WYSIWYG programs that generate HTML are nevertheless built around it.
Many people like that HTML allows a person to view the file as plain text right away, unlike Adobe Flash, which is opaque after it is compiled into .swf. But a future file format for the web does not need to accommodate a plain text editor. It could achieve the same openness by presenting an intermediate level of text editing, one that is not quite the final product (WYSIWYG) but also not as spartan as plain text. For making web pages, WYSIWYG editors bring user interface quirks because, unlike a print design, there is logic behind the presentation that sits in the web browser. This logic cannot be easily exposed without inspector windows or overlays of information. With an intermediate-level text format that has graphical features, dependence on WYSIWYG editors would be reduced.
The remaining task would be to distribute this new type of intermediate editor widely across all platforms, or to write a specification so that it is easily adopted by any operating system.
Accommodating What People Need from the Web
Adding stock design templates as a web standard would recognize the importance of providing regular users what they need to get moving right away with their technology. Right now, more pieces keep being added to the official web specifications, like CSS elements, rather than taking a broader view of the situation and adding stock design templates, UI controls, and modalities that go beyond a scrolled page of information.
In practice, people want to start out with the design of the website, every time, not the tiny tags and elements that make the browser render a page. When the smaller pieces, like CSS elements, are regarded as the starting point, they leave the regular web user without an overall structure. Commercial software enters the situation to provide workable solutions. There is nothing wrong with commercial software for the web, but it shouldn't be as necessary as it is today.
Adobe Flash Had Beneficial Features
Starting around 10 years ago, the web development community, in some kind of irrational fury, tossed out Adobe Flash without looking at its positive aspects. Many types of interfaces were simpler to construct because everything in Flash was consolidated into a single development environment, with parts that worked with one another inside a single format. It was proprietary, but just because Adobe Flash had its downsides does not mean that the rest of the way it worked should be overlooked. Plug-ins for Flash were available that could be very sophisticated, and the quality of commercial interactive work was much higher in the mid-2000s, when corporations did advertising campaigns in Flash. The HTML5/JavaScript downgrade, which was supposed to free people of proprietary Flash tools, actually just led everyone back to Adobe with tools that output HTML5/JavaScript.
The rest has been filled in by a barrage of third-party JavaScript libraries. For a while, it was as if a new library came out every few months just to allow one part of what Adobe Flash enabled people to do in the mid-2000s. When a 3D feature was added to Adobe Flash, it worked within the systematic arrangement that was the Flash IDE. This is in contrast to adding a JavaScript library to a web page, which does not tie into anything larger than its own set of features.
Most web developers today went along with the trend and enjoyed participating in the bashing of Adobe Flash, but the truth is that the Flash development process was much higher in quality than the contraption that is HTML5/JavaScript.
The Web Server Is Stripped Down
It is remarkable how little functionality web servers provide Internet developers by default.
The WorldWideWeb server was designed without any capabilities for:
- Editing of page content through the browser.
- Communications to other web servers.
- Login names and passwords.
- Indexing of information for searching the website.
Just like the problems mentioned earlier with making web pages, all of these functions have to be attached as secondary apparatuses to the servers, and the consequences have been severe: it is a big heap of technological conventions that keeps growing, not a global village platform that keeps getting better. If you want to put together a website, where do you start with the web server? The answer varies based on your needs, but if it is anything more than a few pages, the discussion starts to become overwhelming given how many server technologies are competing to achieve the same basic outcomes. There are minor differences between them, and many long-time web developers say it is more complicated than ever to put up a simple personal website. There are open source software projects that boast that they have simplified some aspect of the process, but in the end they all bring the same familiar hassles, because the general problems aren't going to be solved until the web server and its page format are revised in a big way.
There are a multitude of startup enterprises that exist just to help people put together a basic website, taking care of all aspects of integration between the web page and the web server. There are a dozen or more major content management systems, each providing its own way of retrieving information from a database and serving it. The whole world is constantly struggling to make use of the web in the least tangled way possible. Yet even more technologies have been added on, like cloud computing, which makes serving a simple website an even more involved affair.
As anyone who makes websites knows, different parts of the WWW technological design sit in separate, abstract containers, with the web server doing little besides serving pages, connected to unrelated technologies that run beside it. For the lay web user, the database is just too unapproachable to work with, yet without it no data can be updated dynamically. Website solutions that don't require direct interaction with a database, that are user-friendly, come with a price tag. Or they are associated with a large corporate entity that is angling to get more money out of you later. Some intend to keep your data and website if you stop paying them a subscription fee.
What should be a basic feature built into an Internet server, an article editor, is instead the domain of Silicon Valley corporations and open source projects. There are so many custom variations of solutions for performing the same chore that a lot of time is wasted. It would be nice if a large corporate entity tried to resolve this situation. Unfortunately, though, anyone who has made the web 1% easier to deal with can't bring themselves to offer it to the public without making a fortune, hoping for an IPO someday.
These companies exist because the web doesn't tie its functions into what is present in the rest of the personal computer. You can't edit your blog with a word processing program like Microsoft Word, you can't design your web pages with a regular page layout program like InDesign, and what you program for your desktop computer in a major programming language doesn't relate to what is served inside a web browser. The web has its own interests, and it has never accommodated any other aspect of the personal computing experience.
The Web Started Out Raw and the Consequences of That Persist
Many people assume that the lack of features in a web server was a deliberate design decision, perhaps to keep things "stripped down and efficient," so that programmers could attach any kind of technology to it. But that certainly was not the intention, as the early days of the WorldWideWeb were completely raw and primitive, served over dial-up. It's really that the web was never willing to commit to any single technology on the backend, that there wasn't much of a product plan, and so people have always been cobbling together solutions to make it do things it didn't naturally do, like edit the contents of a web page on the web page itself. That still holds true today.
This is especially obvious to those who used services like Prodigy or America Online, which preceded mainstream usage of the web. Although they were commercial and fenced their contents off like guarded private property, they had a major thing going for them: the technology was unified, easy to use, and regular people liked it. You could even unsend an e-mail on America Online. You rarely send a file to another person on Facebook, Twitter, or Instagram, but on AOL it was a regular thing to pass files around for entertainment.
Those who were around will recall that it was tantalizing to move off of a closed service like AOL onto the open plains of the web, but it was also a major downgrade, with everything suddenly looking cheap by comparison.
The Web Provided an Open Technological Landscape
The downsides of online services like AOL mattered in the end, chiefly that they prevented any third party from building its own technologies inside their system. You couldn't create an entire landscape of your own data and serve it to people inside AOL. There was no such thing. There was no way to do it if you tried; doing so was outside any discussion at the time. That is fundamental. It's a factor in why they disappeared.
No one but America Online could add technologies for users of America Online and that is absurd when we are talking about programmable computers and the pressures to engage in commercial activity across the Internet. No code written by any user could run on these services and no one could sell commercial goods and services on them. Meanwhile, the web was letting people do whatever they wanted in their own private enterprises, in any direction they wanted to go. The web also worked everywhere without needing to sign on first: in libraries, cybercafes, offices, and at home. Just open the web browser, no need to log in just to look at things. What a restriction it would be if every time you opened a web browser you had to sign in to a corporate server that charged you a monthly fee.
Whatever people wanted to program that tied into the web server, they were free to do it and so Amazon and eBay could appear. AOL didn't stand a chance at sticking around. Nothing but AOL could be part of AOL.
It is the web that kept the promise of an open technological landscape, one where people could try out different technologies at home on their computer, with Java applets, 360-degree panoramic images, and VRML. The Macromedia Flash browser plug-in was also a great success, adopted by many large corporations because it could serve complex multimedia interactivity and gave animators plenty to work with. It appeared because web browsers weren't capable of much and, actually, the web servers weren't capable of much either. Without Macromedia Flash and its custom video servers there would be no YouTube, because it was Macromedia's Flash video that made streaming video a worthwhile technology for the web.
The Web Comes from Tim Berners-Lee
The web is an Internet technology that was first conceived of and built by one person, Tim Berners-Lee, and this can't be dismissed as a footnote. Even the URL was conceived of and named by him, as was the choice of SGML (a format with roots at IBM) as the basis for his own HTML. The web has always been guided by the non-profit he established, and major web technologies have always been free. Because the web has changed so little at its fundamental layers, the design of Tim Berners-Lee is present throughout. It is worth noting, then, that the foundations of software frameworks and professional software applications at corporations like Apple, Adobe, or Microsoft are usually designed by more than one person, at least two. We all know the web enjoys global success and is inseparable from use of the Internet today, but the fact remains that it completely lacks cohesive organization. Maybe it is hard for some to hear, but the software framework of the web now looks like an electrical extension cord that has had multiple extension cords plugged into it, with those also being extended by other extension cords. You really aren't supposed to do it this way. On occasion, it looks as if commercial entities are the only ones capable of making new plans for the web, because computer history suggests as much. But that history speaks to a principle, rather than a mere pattern of the past: commercial efforts often come first and are copied by non-profits and open source communities later.
The Necessary Role That Commercial Efforts Play
It is an inescapable fact that open source nearly always draws from closed source products first. Linux is a reproduction of Unix, a closed source effort at AT&T. Mozilla Firefox's source code comes from Netscape, which was venture-funded. LibreOffice (OpenOffice) actually stems from the source code for StarOffice, a commercial product once sold by Sun Microsystems. When commercial companies drive the evolution of products, the general software design is unified. Later, open source benefits from the situation.
It would have to be this way, given that open source accepts piecemeal and non-committal contributions from whoever decides to show up. The reverse can also happen, that a corporation commits to an open source project or takes up an existing one as one of its focuses, but this still shows the vital role businesses and startups play in the progress of computing. Without Apple taking over KHTML there wouldn't be WebKit and therefore there wouldn't be Safari, Google Chrome, and all projects that use Chrome internals. Without Xerox, even the GUI (and laser printer) would not have been developed for the office worker. IBM stands for International Business Machines.
For-profit web browsers and web servers appeared in the 1990s, but their parent companies never contributed to the actual software design of the web itself, to influence it to become more cohesive. Only the World Wide Web Consortium (W3C) has software design authority, and it has never been willing to overhaul the whole thing, only to tack on minor features every certain number of years, like HTML5 tags. Its technology committees are composed of individuals from major corporations like Apple, IBM, Microsoft, and Google, but for some reason none of them are motivated to push the web towards something totally revised. This may be uncomfortable to acknowledge, but if a qualified corporation like Apple decided to revise the web as part of a splinter project, solving all of the problems mentioned on this page, the W3C would become obsolete. Whatever entity did that would fail, however, if it closed its web off or made any single component proprietary. This is what they almost invariably do today, so the web is safe for now. But the W3C is in serious danger if it does not do something different and follow normal software design guidelines for making a commercially viable product.
In an effort to please every open source group that might become part of it, the W3C has been unwilling to favor any unified model that would prevent the endless accretion of new conventions. If there were any commitment to a certain model for the web, the thinking goes, maybe someone would feel left out, and so the W3C committees shy away from doing what has to be done at this juncture in Internet history, which is to start over.
Alternative Views of the Internet Existed
In the 1990s, prior to the web's takeover of Internet traffic and its perceived synonymity with the Internet itself ("I am going to use the Internet now" is often taken to mean "I am going to use the web"), all sorts of alternative, interactive protocols and online technologies were in use, such as graphical BBSs. They showed alternative paths for the Internet, precursors to something else more interactive and technology-oriented. But, without hyperlinks, they were not public-facing or interconnected, so they did not carry the same fluidity and freedom the web provided to jump from one resource to another from within a page, without a user id attached to every move. The ability to add new features to a BBS service that would serve information to the masses in an open way also wasn't a topic of discussion; the masses were not going to learn how to use a BBS. Importantly, it was only the web that was easily adopted by non-profits, corporations, and governments to distribute information to the general public, or at least a wide audience.
The web, because of its structure, did not exclude anyone in society from its use, it didn't ask for any user id, and it was more like a utility than any other Internet technology. It could genuinely attach every personal computer to the Internet, just as a telephone attaches to the telephone system, because every computer was bundled with a web browser. If they really wanted to, users could post some information to the web on a place like GeoCities or, much later, Blogger. But the web was the gateway for everyone because it absorbed other technologies: anyone could send e-mail messages to other people through the Hotmail web interface without ever opening an e-mail client application. At one point the famous AOL Instant Messenger, which relied on a desktop application to run, had a web client for those on the go.
The Web Can Disseminate Information, Setting It Apart from Other Early Internet Technologies
Even in its early days, governments, hobbyists, and regular people made use of the web to distribute tables of information. It was easy for small shops and big corporations to list products on their websites. No comparable setup existed in BBSs or obscure technologies like Gopher.
But you can't say that the web had its complex future in mind at the beginning. The web became a public utility for the same reason it is unpleasant to deal with: it doesn't restrict you from doing whatever you think will appeal to web visitors, but as a collection of technologies it also doesn't know where it is headed or how it should all come together. The web can reach people, whereas other technologies created sealed off spaces. With the web, to reach people you are limited only by your Internet service provider, your server setup, and your ability to tolerate the web's fragmented state.
It took over the Internet without challengers because no one tried to do anything else to supersede it. Adobe, for example, thought that public frustration with the web's lack of features and software design was such a sure bet that people would eventually abandon it and come begging to Adobe to give them something better. So Adobe waited around and did nothing to set up an Internet service. This is according to one of the two founders of Adobe, who expressed regret that Adobe had not gotten more involved in the web's development in the 1990s. And, in fact, if Adobe had done something, it probably would have been better to look at and easier to deal with.
The Internet Now Centers on the Web, Which Hinges on Search
Before 1998, the year Google.com appeared and made searching the web effective, doing a search for web information was an uneven and disappointing experience. The best that people could hope to do was aggregate search results from different search engines and skim through the top results. Apple even shipped an application called Sherlock that did this for people.
The Google founders' PageRank approach completely obsolesced the early web search engines like Lycos, AltaVista, and HotBot. The public completely abandoned them within a few years, like dropping BlackBerry for the iPhone. There was no reason to use any other search engine; none of them came close to Google's ranking of web pages by their connections to other pages.
Yet, viewed from a broader perspective, a search engine that delivers specific search results is not necessarily the best starting point for making use of a global computer network. It is very narrow, and it is well known that few people browse search results past the first several items. Even fewer proceed to the second or third page of results. But because Google search worked so well, alternative modes of using the web, like web directories that grouped websites into categories, were progressively ignored. The point is that most aspects of the web were not carefully planned out, various players have entered the situation, and what they offer frequently exists to fill in for the deficiencies of the web.
At that time, in the 1990s, there were no real considerations for the future of the Internet or the web experience, just an effort to put technology on the Internet that allowed sharing and browsing of information. The buzz was everywhere. A worldwide computer network was exciting and fascinating, conjuring images of "cybersurfing" and the "information superhighway." If anything, it was assumed, the future would be continuously unloading advanced technology on everyone, so there was no need to concern oneself with the basic structure of the web or how it ought to be completely redone. Someday soon, it was thought, VRML would turn into full-blown virtual reality, everywhere. The optimism was high, and the future looked very bright.
If people had been shown through a crystal ball that the world would center around three or four stifling and simplistic social media contraptions that feed on advertising money and loudmouth behavior, all would have been aghast. There is no doubt whatsoever about this. The ebullient optimism of the cybercafe era is extremely far away from the present day public's ugly addiction to reputation, money, and strife that permeates Facebook and Twitter. People wanted to enter The Matrix, not fight and self-promote on The View.
Yet some of this was foreseeable in the mid-2000s. People had their reservations about Facebook and Twitter when they arrived; they knew these were not a great thing for them or for the world. Myspace was a questionable trajectory, too, not liked by everyone because of its emphasis on amassing friend counts. It was a precursor to the relentless displays of vanity found on Instagram, Twitter, and Facebook.
A Diverse Network of Web Servers
In the late 1990s (and into the 2000s), the graphical BBS software Hotline featured a client, a server, and a third type of program a person could run: a tracker. The Hotline tracker was what made it possible to find other servers. All the tracker did was accept notifications from servers that wanted to be listed on it. If a given server was no longer online, the tracker would de-list it. But each client could connect to all sorts of servers because it could get a list from a tracker. In this way, the three pieces of software were interrelated.
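As a sketch of that pattern (not Hotline's actual wire protocol, and with invented names and addresses), the tracker's job reduces to three operations: servers announce themselves, the tracker drops servers that stop announcing, and clients request the current list.

    // A sketch of the tracker pattern described above, not Hotline's actual
    // protocol. Servers announce themselves periodically; the tracker drops
    // any server that goes quiet; clients ask for the current list.
    const TIMEOUT_MS = 5 * 60 * 1000;      // drop servers silent for 5 minutes
    const listings = new Map();            // address -> { name, lastSeen }

    function announce(address, name) {
      // Called whenever a server checks in with the tracker.
      listings.set(address, { name, lastSeen: Date.now() });
    }

    function prune() {
      // De-list servers that have stopped announcing.
      const now = Date.now();
      for (const [address, entry] of listings) {
        if (now - entry.lastSeen > TIMEOUT_MS) listings.delete(address);
      }
    }

    function currentList() {
      // What a client receives when it asks the tracker for servers.
      return [...listings].map(([address, entry]) => ({ address, name: entry.name }));
    }

    // Example: two servers announce themselves, then a client asks for the list.
    announce("203.0.113.7:5500", "Music Archive");
    announce("198.51.100.2:5500", "Chat Hangout");
    prune();
    console.log(currentList());

In a real deployment the announcements and the list request would travel over the network, but the division of roles, not the transport, is the point.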
This has applicability to what a new web could be. Usually, we think about designing a web server in terms of what a single web server would be doing, and then loading that server with whatever features are necessary to embed comprehensive functionality into the web. That is certainly the initial reaction most programmers would have when confronted with the stripped-down setup that is today's web server: just customize it and add things to it. If it gets bogged down at any point later, then distribute its resources across cloud servers.
Instead, the future of the web, its servers, could be viewed as an ecosystem, one in which there are different types of servers that perform different roles but recognize the other types that exist. There can be servers that play the role of indexes of information gathered from many different web servers. Many of these might be run by non-profit organizations. Some could be lightweight servers that run on home computers; when they are idle, they contribute to the whole of a given web activity.
It is this type of ecosystem setup that paves the way for an Internet that is less reliant on large corporate entities to index and serve information, allowing small entities to re-enter the environment. Some servers may only collect and relay images. Others only contain videos. These file servers may link into a different type of server that manages image files distributed across computers in people's homes and offices, so that at peak times there is less weight placed on a centralized server setup. Nothing can compete with huge data centers, perhaps, but it also makes sense that smaller aspects of computing could be offloaded onto the user web for purposes of redundancy, security, and immunity.
This is a very different topic from what people talk about when they are discussing blockchain, because blockchain does not have any intention of establishing different servers that are capable of playing different roles.
A problematic pattern in computing is homogeneity in structure that is applied to circumstances of vastly different purpose. For example, the exact same object-oriented programming structure, classes that encapsulate functions and variables, is used for processing strings just as it is for supervising large groups of operations taking place inside a desktop application. There is no way that such a disparity in purpose would favor a shared underlying programmatic structure. In the real world, it isn't true that every building comes with the same set of facilities or is built with the same materials, as if all made out of freight containers.
Already, there are web crawlers, representing the search engine companies they belong to. This hints at a future where there is specialization among web servers.
Some example servers include:
- A server that only serves VR content, or maintains VR representations of 2D page content.
- A server that only manages real-time interactions, including chat, instant messaging and group video sessions.
- A server that manages a directory of websites pertaining to a certain subject only.
Web Pages
The Awkward Gap Between Print Design and Web Design
Another company, Issuu, exists just to fill in a deficiency of the web's technological structure: designing a page for print does not move to the web easily. Print design stays in place and doesn't need to account for different screen sizes. It also doesn't concern itself with the markup technologies that underlie web pages. But when it is time to convert a magazine or publication to the web (or a tablet-based app), the requirements can be a real hassle. This is even more the case when device interaction varies, from tablet computers to touchscreen phones.
Web design, it should be noted, is a recent notion stemming from the specific technologies that make up the WorldWideWeb. Originally, the web did not have any styling capabilities— CSS was a suggestion made relatively late, in 1994. All foundations of web design come from print design, especially the version of print design that emerged after the "desktop publishing" era began. Print design is the design of elements that remain on a physical page. It became much more sophisticated when the personal computer allowed fast arrangement of images and text, something previously laid out on a graphic design studio desk.
It was already obvious in the late 1990s that people who were experienced in laying out magazines and print materials were left with few options but to design in an entirely different way, and the compromises were extreme.
The tools used for print design don't translate at all to the motley group of technologies that is the web. A person trained in traditional print design could not move his skills directly over to the Internet, because the WorldWideWeb was never designed to accommodate the existence of page layout programs. It still suffers from this problem. CSS is a design technology that evolves, but it doesn't address problems in a way that lines up with print design. Its emphasis on inheritance seems to bring benefits at first glance, but for design work those benefits are mostly limited.
The engaging potential of an interactive computer, of course, is that images can do things they could not on a printed page: they can respond to clicks, move around, and make sounds. The computer can show animations, display 3D models, and play movies. This is called multimedia, which became widely known in the early 1990s. But for multimedia to make sense, it first needs to fully accommodate print design.
Flash Has Been Described Incorrectly
Because the multimedia functionality found in a web browser exists in separate pieces, it requires purchasing commercial software from companies like Adobe to produce a unified output. That was the benefit of Flash, a completely enclosed toolset in which the IDE centralized technologies and linked them together.
The Web Needs a Terrestrial Analogue, a Cyber Space
Launch a web browser and there is a blank page. What is even out there on the web? No one has a starting point unless it is in the bookmarks menu or displayed as a favorite on the web browser's home page.
It's easy to struggle and think, "What should I visit now on the web?" or "Why do I get the feeling there is a lot out there, but I can only visit a few news websites in my own country?" This is supposed to be an international computer network that has vast amounts of information gateways on it, but the experience is usually like walking down a short corridor with only a few doors on each side.
Although it was less like this in the 1990s and 2000s, and is much more this way since social media websites consumed the focus of Internet activity, there was always a nagging feeling that the web was much larger than one's personal use of it.
Websites Do Not Observe the Erection of a Building Landscape
Throughout the day, every day of the year, different websites are being built up independently and concurrently. They do not see one another as they are being built. Therefore, they are not being built up in coordination in any way. This differs greatly from the construction of a physical city or country, in which information about landmarks, utilities, and roadways being laid down is known and continually updated.
If different websites could witness each other being erected as they were built up, the opportunities that are present when buildings are constructed, to coordinate and establish ties, would also be available to websites.
Websites don't collaborate very often. They also aren't like shopping centers that carry more than one shop under a single brand name and location. Instead they are all like individual buildings that sit completely separate from one another. In a regular city, there are sidewalks that flow from one competing shopping center to the next. In this way, not only is the structure of the web invisible to the web user, but it is also uniformly patterned as discrete entities that sit isolated from each other.