Sooner or later, every time-tested technology reaches the end of its life and gives way to something more modern and sophisticated. It happened to 16-bit processors and CRT monitors, it is happening right now to desktop PCs, and in the near future it will happen to laptops and smartphones. But in this article I want to talk not about hardware but about software: how it is changing today, and what technologies we can expect from the open source world in the next few years.
A long way from ring 0
Microkernel operating systems once championed the idea of moving every OS component not tied to the hardware out into user space. This was supposed to protect the system from failures and give it real flexibility: components could be reconfigured and replaced with ease. All of this, however, came at the cost of noticeably higher system requirements; such systems performed some 10-15% worse than an OS with a monolithic kernel, so microkernels were quickly forgotten and settled into the niche of industrial systems (hello, QNX), where fault tolerance matters more than performance.
Yet the advantages of the microkernel were not forgotten, and much later some of its features began to appear in classic monolithic kernels. A prime example is the FUSE interface, which allows file systems to be implemented in user space and has found its place in Linux and FreeBSD. The microkernel model was also implemented, in a somewhat distorted form, in NetBSD's RUMP technology, which made it possible to move a stripped-down, minimal kernel with just the required functionality (including individual drivers) out of ring 0. But the most unusual decision came from a team of FreeBSD developers, who moved the entire TCP/IP stack out of the kernel.
In December 2013, Robert Watson, well known in FreeBSD circles, together with a group of researchers presented Sandstorm, a web server whose most interesting feature was a built-in TCP/IP stack that worked directly with the network adapter driver. The development was experimental and meant to prove the effectiveness of this model, which it demonstrated in comparative performance tests.
The advantage, however, came not from moving the network stack into user space (that by itself gives no benefit), but from the implementation of two ideas:
- bypassing the many layers of abstraction that exist in the kernel and impose unjustified overhead on sending and receiving packets;
- building a minimal network stack optimized for one specific task.
The first idea was implemented with the help of netmap, a framework for sending and receiving packets that is already included in FreeBSD. Netmap provides fast, high-bandwidth direct access to the network adapter's buffers, bypassing the network stack. It is similar to interfaces such as raw sockets, the Berkeley Packet Filter (BPF) or the AF_PACKET interface, but was designed from the start for high-speed access.
Its implementation resembles the Linux framebuffer's mechanism of direct access to video memory, except that where the latter exposes the video memory of a graphics adapter, here we are talking about a network adapter. Just as with the framebuffer, an application that wants to use netmap to work with the network card must first map the netmap device (/dev/netmap) into its memory with mmap(), then write a chain of packets to it in raw form and request that they be sent:
    # open the netmap device
    fd = open("/dev/netmap")
    # switch the NIC into netmap mode
    ioctl(fd, NIOCREGIF, arg)
    # map the packet rings into memory
    mmap(..., fd, 0)
    # ... write packets into the TX ring ...
    # ask the kernel to transmit them
    ioctl(fd, NIOCTXSYNC)
Receiving packets works in much the same way. Note that all packets are handled in raw form, which means that a full-fledged network application needs a network stack running on top of netmap. The main profit lies precisely in combining the incredibly efficient netmap, which can send a packet in as little as 70 CPU cycles, with an optimized network stack which, in the case of a single application, can be kept simple and tailored to one specific task.
That is exactly what Sandstorm implements. In essence it is not just an HTTP server but an Ethernet and TCP/IP stack bundled into a single package optimized for serving small files. It is an eloquent demonstration of the famous UNIX way, where an application performs only one function but performs it as well as possible. According to the published tests, Sandstorm easily outperforms nginx on Linux and FreeBSD, putting a much lower load on the processor while delivering two and sometimes three times the throughput.
On a single processor core running at 900 MHz, netmap can forward 14.88 million packets per second, which corresponds to the maximum rate of Ethernet frames on a 10 Gbit/s link.
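The 14.88 Mpps figure is simply the line rate of a 10 Gbit/s link for minimum-size frames; a quick back-of-the-envelope calculation (mine, not from the Sandstorm paper) confirms it:

```python
# A minimum-size Ethernet frame is 64 bytes, plus 8 bytes of preamble
# and a 12-byte inter-frame gap: 84 bytes, i.e. 672 bits on the wire.
link_rate = 10_000_000_000      # 10 Gbit/s
frame_bits = (64 + 8 + 12) * 8  # 672 bits per minimum-size frame

pps = link_rate / frame_bits
print(round(pps))  # 14880952 -- i.e. ~14.88 million packets per second
```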
Linux versions 2.2 and 2.4 shipped a simple in-kernel HTTP server (khttpd) that achieved high performance by reducing the number of context switches and buffer copies.
Incidentally, netmap and Sandstorm were not the first to implement the idea of high-performance direct access to the network adapter. A similar technique is used by the Click software router and the pkt-gen traffic generator, both running in kernel mode. The idea of exporting the network card's buffer into user space was partially implemented in PF_RING and in Linux's PACKET_MMAP, but instead of the adapter's own buffer they expose a special memory region whose contents are copied into the adapter's buffer at send time. Netmap, by contrast, implements true zero-copy: once packets are written into memory, there is no further copying to drag performance down. There are other techniques for direct user-space access to the network adapter, but they require a special driver for each card, whereas netmap works with standard ones.
Render unto Caesar the things that are Caesar's
Released by Google in 2008, the Chrome browser in just two years overturned all notions of what a web browser of the 21st century should look like. Heavyweight designs stuffed with features gave way to the idea of simplicity, speed and security. Besides the minimalist interface and the famous V8 JIT compiler, Chrome stood out for its multiprocess system of rendering web pages and running plugins, which allowed it to survive in conditions where Firefox and Opera crashed from bugs in the Flash player or in their JS and HTML engines.
At first this design was perceived more as a proof of concept, an idea for the idea's sake: it protected the browser against relatively rare crashes, but it also raised a number of problems of its own, including a considerable appetite for CPU and memory. However, the further the web moved toward becoming the operating system of the 21st century, the clearer it became that multiprocessing was not merely entitled to exist but should be an integral part of any modern browser.
Mozilla started thinking about a transition to a multiprocess model in 2009 (the Electrolysis project), but approached it from a slightly different angle. For Mozilla the new design was aimed more at improving the responsiveness of the heavy, XUL-based interface than at security. The initial idea was not to put the contents of each tab into a separate process, but to separate the interface from the HTML engine so that they could be spread across different processor cores, thereby increasing the overall performance and responsiveness of the browser.
In the course of the project it became clear that the idea was so difficult to implement and scale that it was easier to shelve it and try other optimization methods. In 2011 the developers froze the project indefinitely and focused their efforts elsewhere: they reworked the handling of internal databases (an old Firefox problem), optimized the garbage collector, and used the multiprocess experience to move plugins into a separate process.
For two years the idea of parallelism seemed forgotten, but the work continued, and the result was presented to the public in December 2013 in the form of nightly Firefox builds with a complete separation between the interface code and the web page engine. In essence, the new architecture was built on the principle of "render unto Caesar": one process handles the interface and everything related to it, while a second paints the results of web page processing onto a canvas. Finally, the output of both processes is handed to the compositor, which stitches the picture together and displays it.
As already mentioned, putting each tab's content into a separate process was not part of this step: all tabs are still served by a single process. Nevertheless, the new architecture solved several problems at once. With the ability to spread its processes across different cores, the browser became noticeably faster, especially in its responsiveness to user actions. It also became more resistant to crashes: if the rendering engine falls over, the browser itself keeps working, although all the tabs become unavailable (admittedly a mixed blessing).
In the future the plan is to process each page in a separate process, as is done in Chrome. Besides the obvious advantages in fault tolerance and security, this should improve memory management: if each page has its own process, any memory that process leaks is reclaimed when the page is closed, and overall memory fragmentation drops thanks to the clean separation between processes.
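The crash-isolation property is easy to illustrate with a toy sketch (my own illustration, not Mozilla's or Google's code): if each "tab" lives in its own OS process, one of them dying does not take the "browser" down with it.

```python
import multiprocessing

def render_tab(url):
    # Hypothetical per-tab worker (the URLs are made up):
    # pretend one particular page crashes the rendering engine.
    if url == "http://crashy.example":
        raise RuntimeError("renderer crashed")

# Use fork explicitly so the sketch behaves the same across platforms.
ctx = multiprocessing.get_context("fork")
tabs = ["http://ok.example", "http://crashy.example", "http://also-ok.example"]
procs = [ctx.Process(target=render_tab, args=(u,)) for u in tabs]
for p in procs:
    p.start()
for p in procs:
    p.join()

# The crashed tab exits with a non-zero code; the others finish cleanly
# and the parent "browser" process keeps running.
statuses = [p.exitcode for p in procs]
print(statuses)  # [0, 1, 0]
```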
At the time of this writing the new feature was not yet enabled even in the nightly builds; to turn it on you had to set the browser.tabs.remote option on the about:config page.
Many years ago, when downloading a DVD-quality movie over a torrent network took at least two hours, I always enabled the client option that prioritizes chunks closer to the beginning of the file, so that an hour later I could already start watching the movie while the rest of it downloaded during playback. Today, when 60 Mbit/s has become standard even in the provinces, this trick has lost its relevance, but it gave life to a new technology: decentralized live broadcasting.
In March 2013 BitTorrent Inc., the company responsible for developing the protocol of the same name and the uTorrent client, showed the world a new protocol, BitTorrent Live, which uses the principles of decentralized P2P networks to organize streaming video in real time. On the basis of the existing BitTorrent they created a protocol that uses the same idea of obtaining and distributing pieces of data from different nodes, except that now it deals not with a particular file but with a continuous stream of data, or, more precisely, with the small part of it that is current at the moment.
Interestingly, just two months before the streaming protocol was announced, BitTorrent Inc. introduced BitTorrent Sync, a technology for synchronizing multiple remote machines with each other.
For such a system to work efficiently, a node needs a reasonably high (but by today's standards ordinary) speed and low latency, but in return it makes possible a truly democratic home broadcasting system that does not require a wide channel to distribute a stream to many viewers. Moreover, with BitTorrent Live the opposite rule applies: the more people watch a broadcast, the higher its availability and total bandwidth, since each node acts as a repeater.
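The difference between a classic file-oriented piece picker and what a live stream needs can be sketched roughly like this (a toy model of mine, not BitTorrent Live's actual algorithm, which is not publicly documented): a live peer only cares about a small sliding window of pieces just behind the broadcast head, and everything older is simply dropped.

```python
WINDOW = 8  # how many pieces behind the live head we still care about

def pick_piece(head, have, available):
    """Pick the most urgent missing piece inside the live window.

    head      -- index of the newest piece produced by the broadcaster
    have      -- set of piece indices this node already has
    available -- set of piece indices offered by connected peers
    """
    window = range(max(0, head - WINDOW + 1), head + 1)
    wanted = [i for i in window if i not in have and i in available]
    # The piece closest to the playback point (oldest in the window)
    # is the most urgent; anything older than the window is ignored.
    return min(wanted, default=None)

# The head is at piece 100; we already hold a few recent pieces.
have = {95, 97, 99, 100}
available = set(range(90, 101))
print(pick_piece(100, have, available))  # 93: oldest missing piece in window
```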
Unfortunately, as of this writing no technical information about the protocol was available (apart from the rather confusing text of the patent), so we cannot see how its creators solved problems such as the small number of seeders in the first stage of distribution, or how low latency is achieved when chunks are replicated across many nodes. Nevertheless, client software is already available for PCs running Windows, Ubuntu Linux and OS X; you can use it, among other things, to relay an RTMP stream and judge the protocol's capabilities for yourself.
It is also worth noting that the idea of decentralized streaming is not new in itself. The first more or less successful example of such a system was Tribler, sponsored by the European Union and later used to create Ace Stream (formerly BitTorrent Stream), which by now has become quite popular. In essence it is just an add-on to the BitTorrent protocol.
Launchd for FreeBSD
In 2010 Lennart Poettering introduced the first version of the systemd system manager, one of whose most notable features was parallel startup of services based not on declared relationships between components but on dependencies on resources. Later systemd acquired a huge number of other features and turned into an object of ridicule, but the architecture stayed the same. And that architecture owes no small debt to Apple and launchd, the system manager used in OS X.
It was functionality that first appeared in launchd that allowed systemd, for all its contradictions, to become the standard. Its developers put forward the, let's not be afraid of the word, brilliant idea of pre-creating the UNIX sockets of system services so that services can be started early and in parallel (the syslog daemon, for example, is needed by cron not as a running process but through its UNIX socket, to which cron connects at startup; creating that socket in advance allows syslog and cron to start simultaneously). It was also launchd that, though not quite first, combined the services of init, inetd, atd, crond and watchdogd in a single daemon, making it possible to start a service automatically when a connection arrives on its network port, at boot, on a schedule, or under watchdog supervision, and unifying their management interfaces. Launchd coordinates the launch of disparate system services and makes the boot process genuinely fast.
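The socket-activation trick is easy to demonstrate in miniature (a sketch of the idea, not launchd's or systemd's actual code): the manager creates and listens on the socket before the service exists, so clients can connect immediately; the service later inherits the ready socket and simply drains the queued connections.

```python
import socket
import threading

# 1. The "manager" pre-creates the service's listening socket.
manager_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
manager_sock.bind(("127.0.0.1", 0))
manager_sock.listen(5)
port = manager_sock.getsockname()[1]

# 2. A "client" connects *before* the service has started:
#    the kernel queues the connection on the listening socket.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")

# 3. Only now does the "service" start; it is handed the ready socket
#    and serves the backlog as if it had been running all along.
def service(listen_sock):
    global received
    conn, _ = listen_sock.accept()
    received = conn.recv(5)
    conn.close()

t = threading.Thread(target=service, args=(manager_sock,))
t.start()
t.join()
print(received)  # b'hello'
```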
Apple opened the launchd code back in 2005, and the first attempts to port it to FreeBSD were made soon after. A student, Tyler Croix, took it up and, as part of Google Summer of Code, produced a half-working port. After that the project was successfully forgotten, but in late 2013 the author decided to continue what he had started and launched Open Launchd, a project that aims to bring the existing code into working condition and publish it in the ports tree.
What will launchd give FreeBSD users? Much the same as systemd gives Linux users: record-fast system boot, flexible control over running daemons, starting a daemon only when it is actually needed (for example, when a request arrives on its network port or another daemon requires it), and a unified management interface. Unlike the simple and concise ini-style files used by systemd, launchd relies on XML configuration files, something developers and users are unlikely ever to accept, but nothing prevents creating a custom configuration file format.
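For comparison, this is roughly what a launchd job description looks like in its XML plist format (a simplified illustration; the label and program path are made up). The Sockets block tells launchd to create the listening socket itself and start the daemon only when a connection arrives:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.example.mydaemon</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/mydaemon</string>
    </array>
    <key>Sockets</key>
    <dict>
        <key>Listener</key>
        <dict>
            <key>SockServiceName</key>
            <string>8080</string>
        </dict>
    </dict>
</dict>
</plist>
```

The equivalent systemd setup would be two short ini-style files (a .socket and a .service unit), which gives a feel for why the XML format draws complaints.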
Work on the port is still at an early stage, but it exists, and in a year or two it may well be usable to boot even home systems. Unless, of course, Tyler abandons it all over again.
The first version of the X Window System was born in 1983 as a system for remotely running graphical applications on thin clients. Thirty years have passed since then, but the architecture of X has hardly changed. It is still, at heart, software for thin clients, able to meet modern demands only thanks to a huge number of extensions, additions and crutches. According to experts, X.org in its present form is a giant lump of spaghetti code, much of which hangs there as dead weight and is not cut out only because untangling it would take an enormous amount of time.
Over the past twenty years there have been many attempts to replace X outright, but all of them failed, either because third-party developers refused to support the proposed alternatives or because those alternatives did not meet the requirements. Cooperative work on a replacement that would satisfy everyone began only a few years ago with the launch of the Wayland project, joined by key X.org developers.
A stable version of Wayland, however, appeared only last year, and to this day it remains an experimental option on top of which only a small amount of software can run, to say nothing of a full graphical environment such as KDE, GNOME or Xfce. But a year and a half ago work began on Hawaii, a desktop environment designed from the start for Wayland alone, with no ties to X.
On December 25 the developers released a usable version, Hawaii 0.2, built on the Qt 5 toolkit, which has long since been ported to Wayland. The environment includes a classic Windows-style desktop with its own window manager (compositor) Green Island, an application menu, system notifications, a screensaver, and support for multi-monitor configurations.
The project is also developing additional tools: a system configuration application, the Fluid library for simplifying application development, the Swordfish file manager, an archive manager, the EyeSight image viewer, the Cinema video player, a terminal emulator, and a set of wallpapers and icons. You can see all of this with your own eyes by installing the Maui distribution or by building Hawaii from source.
As a web browser to go with Hawaii, ozone-wayland is a perfect fit: a version of the Chromium browser for Wayland prepared by Intel developers. The odd name refers to Ozone, the abstraction layer Chromium uses for display output; Ozone, in particular, makes it easy to port the browser and its derivatives to third-party graphics systems.
By its nature the open source world is very conservative. All of us still use technologies invented by the founding fathers of UNIX that have somehow survived to the present day. The monolithic kernel, the file system layout, the command set, device files, the graphics subsystem: it is all still there. Many of these technologies have outlived their creators, which says something about how well they were made. And yet, however long a technology's term of service, there is no moving forward without change. I am still sincerely saddened by the death of Plan 9, the OS that was meant to replace UNIX, but I will be glad when the now-useless pile of code called the X server is finally thrown into the dustbin and the System V init system is replaced by something genuinely modern. UNIX is fine, but even the elderly do not live forever.