Future Technology: Predictions for the Next Decade in Technology, 2020-2030


Flexible, portless smartphones with ultrabook-class performance

It is easy to trace the development path of smartphones and pick out the main trends. First, screen diagonals keep growing: the 5.5” Galaxy Note II was perceived as practically a tablet back in 2012, while today we consider the 7.2” Huawei Mate 20 X just a somewhat large smartphone. Obviously, this cannot go on forever: a 7” device can still fit into a jeans pocket, but 8” tablets already require a separate bag, which clearly hurts their popularity.

Understanding this, manufacturers began shrinking the display bezels in parallel. They started with the side bezels, effectively eliminating them in 2015 with the Galaxy S6 edge, whose screen is curved and flows smoothly into the body. Then came the top and bottom bezels. The bottom was easy: physical buttons were replaced by gestures and the fingerprint sensor moved under the screen. The top is trickier, because the front camera refuses to go away, so it either appears as a small punch hole in the display, as on the Galaxy S20, or is built into the body and mechanically pops out of it, as on the OnePlus 7 Pro.

As a result, the bezel problem has largely been solved: you can now buy a 6” smartphone with the footprint of a 5” device from two or three years ago. That is pleasant, although not always convenient: as the bezels got thinner, accidental touches on the edges of the display became more frequent. But where to grow next? A possible path was shown by Samsung with the Galaxy Fold and Huawei with the Mate X - flexible smartphones.

The idea is fairly obvious: if the diagonal can no longer grow directly, yet content demands it, why not make the gadget fold? Of course, you could just buy a tablet and not bother, but not everyone wants to carry one alongside a smartphone, which is clearly visible in falling tablet sales as phone diagonals keep increasing.

Few would argue that the production technology for flexible OLED displays is still raw: Samsung postponed the launch of Fold sales by half a year, and even now it can only be bought in a handful of stores in South Korea. But hardly anyone would deny that such gadgets are the future. What comes next? Flexible display manufacturing will be polished within a couple of years at most - where to develop after that?

A possible answer is a smartphone with two folds. Why? Simple: take the same Huawei Mate X (https://www.aimagnus.com/2020/06/best-smartphones-of-2020-on-basis-of.html) and unfold it. What do we get? A large 8” display with an 8:7.1 aspect ratio - in other words, a big square. Now recall the current trend toward ultrawide monitors: most content is produced for 16:9 or 21:9, so on a square display it will either be cropped substantially or played back with bars taking up half the screen.

Adding another fold could solve this: the result could well be a 10-11” tablet with the familiar and convenient 16:10 aspect ratio. But this will obviously take not a year or two - most likely more than five.

Another easily traced trend is the shrinking number of ports and buttons: first the 3.5 mm headphone jack went, then the physical home button. Moreover, the flagships of the past few years have reasonably fast wireless charging, so the need for USB-C or Lightning may also disappear. Accordingly, at the beginning of this year Meizu introduced the Zero, a smartphone with no buttons or ports at all. Instead of a physical SIM it uses eSIM, the side buttons are replaced by pressure-sensitive touch panels, and the device charges over wireless charging as powerful as 18 watts.

Unfortunately, this device never went on sale, and given that Meizu is currently going through hard times, it may never see the light of day at all. Nevertheless, the Chinese company demonstrated that a gadget without ports and buttons is entirely possible, and there is a chance that in a few years all smartphones will be like that.

You may also notice that some smartphone makers, such as Apple, have begun leaning heavily on the power of mobile CPUs and GPUs: the A12X is already comparable to game consoles in its capabilities, and the A13 will obviously be even faster. Windows tablets run smoothly on the Snapdragon 845, at times approaching the performance of Intel Core chips. And given that more and more people are abandoning the PC in favor of pocket gadgets, it is quite possible that in a few years mobile chips will become so fast that serious games and software will be released not only for desktop operating systems but also for Android and iOS.

Mobile photography will progress too, of course: over the past five years it has reached the point where you can get clear shots even at dusk or at night, so a good number of users no longer need a separate camera. What progress is possible here? It probably will not come to ten-camera smartphones, but capturing a 3D model with instant processing and a suitable background applied no longer looks like science fiction at all.

So what might the smartphone of the future look like if we combine all of the above? Flexible, with three screens, without ports, buttons or bezels, with a powerful processor and an excellent camera. Unfortunately, a significant improvement in battery life should not be expected: lithium battery technology has not changed much in the 30 years it has been on the market, and a dramatically higher energy density is unlikely even in 10 years.

Smart curved ultrawide 8K OLED monitors at 240 Hz

Many have probably seen the huge translucent touchscreen monitors that fill the scenes of science-fiction films. That is either a very, very distant future or pure fantasy: as practice has shown, the world is not yet ready for touch control. Three years ago Microsoft introduced the Surface Studio, a huge 28” touchscreen all-in-one - and, as we already know, it did not take off. As it was decades ago, the most convenient and accurate tool for working with a PC is still the mouse, not a finger. And that is before we even mention translucency, which would at the very least interfere with watching movies because of objects behind the screen, or diagonals of a couple of hundred inches, which would be anything but comfortable to work with.

So if we come back down to earth and look again at where the market is heading, the picture of future monitors emerges quite clearly. There will be no IPS or MVA, let alone TN - only OLED with true blacks, because backlight uniformity is hard to achieve on a 30” panel and image quality requirements keep growing. Moreover, only this technology can deliver an honest response time of a few milliseconds - alas, even the best LCD panels are closer to 10 ms, which is a lot for high-refresh-rate screens.

It is also obvious that refresh rates will keep rising, since even 100 fps surprises few people today (except, perhaps, console players of Control). At the same time, anything above 240 Hz already looks illogical, since very few people can perceive more frames per second than that. Yes, panels with a 300 Hz refresh rate already exist, but they are unlikely to spread widely: almost everyone can see the difference between 60 and 120 Hz, but 120 versus 240 Hz already requires a careful side-by-side comparison, so 300 Hz and above looks like pure marketing.
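To see why the returns diminish, here is a rough Python sketch comparing frame times at different refresh rates with the ~10 ms LCD response time mentioned above; the figures are illustrative, not measurements:

```python
# Frame time at each refresh rate, compared with a ~10 ms LCD response time
# (figure taken from the paragraph above); OLED responds in well under 1 ms.
LCD_RESPONSE_MS = 10.0

previous = None
for hz in (60, 120, 240, 300):
    frame_ms = 1000.0 / hz
    panel = "too slow" if frame_ms < LCD_RESPONSE_MS else "keeps up"
    step = f", {previous - frame_ms:.2f} ms shorter than the previous step" if previous else ""
    print(f"{hz:>3} Hz: {frame_ms:5.2f} ms per frame (LCD {panel}{step})")
    previous = frame_ms
# 60 -> 120 Hz shortens the frame by ~8.3 ms; 240 -> 300 Hz by only ~0.8 ms,
# which is why the last step is so hard to notice.
```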

As for resolution, we still live in a Full HD world, where 4K monitors and content for them remain relatively rare. Yes, 8K TVs already exist, but there are practically no films in that resolution, and no graphics card can deliver acceptable gaming performance at it. So 4K will be popularized over the next few years, and in 10 years, quite logically, it will be 8K's turn.

Physical dimensions, though, will hardly grow, and the reason lies with us: we rarely sit more than a meter from the monitor, so at that distance a diagonal beyond 50-55” simply becomes redundant. At the same time, the familiar 16:9 will most likely fade away: more and more ultrawide monitors with 21:9 or even 32:9 aspect ratios are going on sale, and understandably so - they fit more information, letting you lay out two or three full windows side by side. Besides, games look amazing on them, especially if the panel is curved.

And of course monitors will become smart: they can already overlay a virtual crosshair or an FPS counter, and their functionality will clearly expand - for example, automatic selection of a color profile when a specific application launches, automatic brightness adjustment based on what is happening on the screen, and so on.

Putting it all together, we get the following picture: in ten years the top monitors will be curved and ultrawide, stuffed with various smart features, with 8K resolution and fast 240 Hz OLED panels.

Thin, powerful gaming laptops with long battery life

A funny trend: smartphones keep getting bigger while gaming laptops keep getting smaller. And it is easy to explain: who would turn down an ultrabook that, once plugged in, can easily replace a powerful home computer for games or work?

So manufacturers will keep moving in this direction, and the Asus Zephyrus is only the first step. More efficient coolers will clearly be developed, vapor chambers and liquid metal will be used instead of thermal paste - even now you can find a laptop with an RTX 2080 weighing less than 2 kg, so it is pleasant to imagine what awaits us in the future.

At the same time, the battery life of such devices keeps growing: the discrete graphics card has long been able to switch off when it is not needed, and processors have become so energy efficient that there are already gaming ultrabooks with 6-core CPUs that last 5-6 hours of web surfing. That is already a decent figure, and there is a good chance that in ten years it will grow by at least half again.

But the gaming "coffins" weighing 5 kilograms and five centimeters thick are likely to disappear: today they deliver only a 10-15% performance advantage in games over thinner solutions, which hardly justifies how inconvenient they are to use in our increasingly mobile world.

Apple MacBooks will switch to ARM

The Cupertino company is focusing more and more on merging mobile iOS and desktop macOS: iPad software can now be adapted to run on macOS, while increasingly full-fledged applications are being released for the iPad in turn. So it is quite possible that in a few years, when the App Store has plenty of ARM-compatible software, Apple will release its first laptop on this architecture.

Apple has plenty of reasons to make such a device - chiefly the serious gains in ARM energy efficiency and performance in recent years. The current A12X runs at least as fast as the Intel Core m in the 12” MacBook while consuming far less power, including in standby. And such a Mac is quite enough for browsing, watching movies, editing documents and simple photo and video processing - does the average user really need anything more?

Of course, today's ARM chips are still far from a top-end Intel Core i7, but the key phrase is "for now": while x86 solutions have become 2-3 times faster over the past 10 years, ARM CPUs have increased their performance by an order of magnitude or even two, and they are not slowing down. So it is quite possible that in ten years they will draw level, and moving the MacBook Pro to ARM will be painless in terms of performance - while many people will notice the seriously improved battery life.

Smart wireless headphones of all kinds

Alas, wired headphones are gradually becoming a thing of the past: fewer and fewer smartphones come with a 3.5 mm jack, and it is quite possible that in a few years the first laptop without one will appear on the market. The digital Lightning and USB-C connectors that replaced it clearly satisfied no one: headphones using them cannot be plugged into other gadgets, they occupy the device's only port, and there are few models to choose from. That leaves only one option - Bluetooth.

The reasons are understandable: modern Bluetooth codecs such as aptX HD or LDAC deliver excellent sound quality at low latency, and given the rather weak DACs in most smartphones and laptops, wireless headphones may even sound better with them than wired ones. Nor should we forget the absence of a wire that gets in the way or gets yanked out by accident.

What will change in ten years? Mostly, it seems, the changes will be quantitative rather than qualitative: the battery life of TWS earbuds, which now rarely exceeds 4-5 hours, will grow; codecs will improve, weight will drop, charging will get faster - in short, the expected kind of progress.

The one area where real progress is possible is smart features. Today's best headphones offer a couple of noise-cancellation modes (such as amplifying human voices) and simple sensor tricks like "remove an earbud - the music pauses". How could this be improved? Any way you like: for example, by integrating directional microphones that amplify the voice of one particular person - your friend or a tour guide - while completely cutting out other voices around you. Or by adding intelligent volume control that analyzes ambient sound and mutes the music when someone addresses you. The scope for imagination is simply huge.

Gaming PCs will become the lot of geeks

As we wrote above, our world is becoming ever more mobile: thin, powerful laptops are coming out, game-streaming services are developing, portable consoles are appearing - and all of this makes PC sales fall year after year.

Particular emphasis is now being placed on streaming services: Google, for example, promises 4K quality in its Stadia for literally ten dollars a month. Admittedly, not every home has a gaming PC fast enough to play comfortably at that resolution, while home internet at 40-50 Mbit/s is not so rare even in Russia. So more and more users will be asking themselves "why build a PC and upgrade it every couple of years if there is Stadia?"

Portable gaming laptops are also biting off their share of the PC market: why own two devices, a gaming computer and a travel ultrabook, if they combine perfectly into one? So there is a good chance that in ten years gaming PCs will become the lot of wealthy geeks - those who want to play not at 4K60 on a streaming service or a laptop, but at 8K240, and are prepared to pay serious money for it.

The next generation of consoles may be the last

For exactly the same reason, there is a chance that the new Xbox and the PlayStation 5 will be the last full-fledged game consoles. Many console gamers have surely noticed that optimization problems appear toward the end of each console generation's life cycle. That is to be expected: over 6-7 years desktop CPUs and GPUs advance enormously while the consoles stay the same, forcing developers to cut both resolution and graphics settings in their games - and even that does not always save them from FPS drops.

The solution here is simple - why not move gaming to the cloud? The user would only need to buy a special box for a hundred dollars or so that receives the incoming data stream from the server, decodes it, outputs it to the TV and sends controller commands back. And this is obviously much easier to do than on PC, where the spread of components and peripherals is huge.

In addition, cloud gaming lets you forget about picture quality degrading over time: everything depends only on the performance of the servers "on the other side of the screen", and nothing prevents Sony or Microsoft from constantly increasing their power.

Tablets will become niche products

Plenty of manufacturers have tried to impose a touch-driven future on the world: Apple in 2010 with the iPad, Microsoft in 2012 with the Surface, Google with a string of tablets five years ago.

The result is rather sad: powerful Android tablets have practically disappeared, and the Surface line occupies a very small, already niche share of the market. That leaves the iPad, which Apple is trying hard to keep afloat by turning it into a computer - yet even its sales have begun to fall.

The reason is simple: a touchpad and a mouse are more comfortable than a finger. It is easier to steer a cursor than to poke at the screen. The touch-driven future remains only in the movies.

But remember that laptops are becoming more mobile, smartphones keep growing, and flexible gadgets are just around the corner. So tablets have only one place left to go: diagonals of 10-12”, so that on the one hand they do not compete with flexible smartphones and on the other remain more compact than even the thinnest and lightest laptops.

Even then they will be in an extremely vulnerable position: a laptop will still be more convenient to use and a smartphone more compact. So tablets will most likely be left with niche professional uses - in education, for example, or for displaying large tables or A4 pages at full size while taking up less desk space than a laptop. As for their future with ordinary consumers, it is very, very foggy.

Power banks will no longer be needed

External batteries are an example of a product that has developed actively over the past ten years. The reasons are familiar to all of us: the days when a PDA lasted 2-3 days are long gone, and now we are glad if a smartphone's battery makes it to the evening. Plenty of intelligent features such as power-saving modes have appeared to squeeze another hour out of the precious device, but a power bank is obviously far more reliable and effective in this respect.

Still, there are two technologies that may well "bury" external batteries: fast charging and wireless charging. The first is self-explanatory: even if you forgot to charge your smartphone overnight, in the morning half an hour will "feed" it to 50% or more. The same goes for the daytime: while you are having lunch, your gadget can top up another ten or twenty percent, again postponing the moment it shuts down. As a result, the technology has become incredibly popular, which is entirely logical: as the saying goes, fight fire with fire, and ever faster discharge is most obviously compensated by even faster charging.
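As a rough illustration of that half-hour top-up, here is a minimal sketch; the battery capacity, charger power and efficiency are assumptions, and real charging also slows down as the battery fills:

```python
# Roughly how much charge a 30-minute top-up adds; all figures are
# illustrative assumptions, and real charging tapers off near 100%.
battery_wh = 11.5      # ~3000 mAh cell at 3.85 V
charger_w = 18.0       # fast-charge power
efficiency = 0.85      # conversion and heat losses
minutes = 30

added_wh = charger_w * efficiency * minutes / 60
print(f"~{100 * added_wh / battery_wh:.0f}% added in {minutes} minutes")  # ~67%
```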

Wireless charging looks even more futuristic: in principle you need no cables or charging bricks - just put the gadget on a charging surface and voila, it is "feeding". And this is exactly what could develop seriously in the future: such surfaces could be built into cafe tables, into your desk at home or at the office - anywhere there is a flat surface. Then there would be no need to carry cables and a power bank at all: the smartphone could charge almost anywhere.

USB-C will take over the world

Yes, that is exactly what we were told back in 2015, when Apple released the MacBook with a single such port and the first Android flagships with it began to appear. The result, I think, we all know: at the moment USB-C has become the default connector only on most Android smartphones and Apple laptops. You can also find it on expensive motherboards and expensive laptops, but usually as a single port, added on the principle of "everyone else does it, so will we". Alas, peripherals with the new connector are still scarce and expensive, so adapters and docking stations from the "old" ports to USB-C are nowhere near leaving the market.

The reason is understandable - legacy. We are all used to good old USB 3.0 and HDMI, and few people want to replace them, often at considerable cost (especially knowing you will still have to buy adapters, at least to USB-A). Nevertheless, it is clear that USB Type-C will eventually take over the world.

Why? First, it means unified charging for most laptops and ultrabooks: with smartphones everyone more or less agreed, settling first on microUSB and then on USB-C, but with laptops almost every manufacturer still uses its own port, completely incompatible with the others. USB-C, meanwhile, can carry up to 100 watts - enough even for a 15” MacBook Pro, let alone simpler ultrabooks. Some manufacturers have already realized this and begun using it, and not only Dell or Apple but even Xiaomi.
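Where the 100 W figure comes from: USB Power Delivery negotiates a fixed voltage and a current limit, topping out (in the spec of that era) at 20 V and 5 A. A quick sketch of the commonly used profiles:

```python
# USB Power Delivery: the charger and device agree on a voltage/current pair.
# The commonly used fixed voltages are 5, 9, 15 and 20 V; 20 V at 5 A gives
# the 100 W ceiling -- more than the 87 W brick of a 15" MacBook Pro needs.
profiles = [(5, 3.0), (9, 3.0), (15, 3.0), (20, 5.0)]  # (volts, max amps)
for volts, amps in profiles:
    print(f"{volts:>2} V x {amps:.1f} A = {volts * amps:.0f} W")
```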

Second, versatility: through a single port you can simultaneously connect a monitor, a keyboard and a hard drive, and all of it will work. Over "regular" USB 3.0 you basically cannot get a 4K picture. You can also forget about the rule of three attempts (the number of tries it takes to plug a flash drive into USB-A): the connector is reversible, so you can plug it in blind.

On top of that, the first USB4 devices will appear within a couple of years; USB4 exists only in the USB-C form factor and offers speeds of up to 40 Gb/s - 8 times faster than ordinary USB 3.0 - which will let you connect even fast SSDs through this connector without losing speed.
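A quick sketch of what the jump from 5 to 40 Gb/s means in practice; the 100 GB file size is an arbitrary assumption and protocol overhead is ignored:

```python
# Time to copy a 100 GB file at the nominal line rate, ignoring protocol
# overhead and assuming the drive on the other end can keep up.
file_gb = 100
for name, gbit_s in (("USB 3.0, 5 Gb/s", 5), ("USB4, 40 Gb/s", 40)):
    seconds = file_gb * 8 / gbit_s
    print(f"{name}: ~{seconds / 60:.1f} min")   # ~2.7 min vs ~0.3 min
```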

In short, the port is too "tasty" to turn down, so in ten years it will almost certainly be the default connector in most devices.

VR and AR headsets will become as popular as joysticks

Graphics have developed along the same path as geometry: first we admired 2D, then, thanks to the rapid progress of video cards, pseudo-3D appeared - our brain is easy enough to fool by showing it flat objects at an angle, and it will perceive them as three-dimensional.

Later came 3D glasses, whose purpose was the same: to deceive our eyes in various ways by showing each of them a slightly different picture so that the brain combines them into a realistic 3D image.

But everything is clearly moving toward true 3D, where you can physically walk around a virtual world. The first genuinely mass-market VR headsets appeared a few years ago - the Oculus Rift and HTC Vive: they have a separate display for each eye plus a host of sensors that let them respond correctly to your movements and head turns - almost real immersion, lacking only physical interaction with the virtual world.

Of course, these first headsets have plenty of problems. Not every video card can render two 2K pictures at once, and at a high frame rate besides (at least 90 fps for a feeling of smoothness). It also turns out that current anti-aliasing methods, which work perfectly on flat screens, start to fail in VR, which hurts the realism of the picture. Add the heat and weight of the helmets and the not particularly high resolution of the displays (being so close to the eyes, they require extremely high pixel density). The small number of full-fledged VR games completes the sad picture: yes, there are Skyrim and Fallout 4, but at best a few dozen suitable projects exist, which is not enough - so few users are willing to pay 500-700 dollars for such an unusual gadget.
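To give a sense of the load, here is a small sketch comparing the raw pixel throughput of a headset with that of an ordinary monitor; treating "2K" as 2560x1440 per eye is an assumption, and real rendering costs more than pixel counting suggests:

```python
# Raw pixels per second a GPU must produce: two per-eye panels at 90 Hz
# versus one flat 1080p monitor at 60 Hz. "2K" is taken as 2560x1440 here.
def pixels_per_second(width, height, hz, panels=1):
    return width * height * hz * panels

vr = pixels_per_second(2560, 1440, 90, panels=2)
flat = pixels_per_second(1920, 1080, 60)
print(f"VR headset:   {vr / 1e6:.0f} Mpix/s")    # ~664
print(f"Flat 1080p60: {flat / 1e6:.0f} Mpix/s")  # ~124
print(f"Ratio: {vr / flat:.1f}x")                # ~5.3x before any effects
```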

Of course, all of these problems will be solved: Wi-Fi 6 speeds are already enough to get rid of the cables, video cards will appear for which driving a pair of 4K screens at 120 Hz is no problem, and prices will fall, leading to a surge in sales and an ever growing number of VR projects. It is quite possible that in ten years playing on an ordinary flat monitor will be remembered as an atavism.

And do not forget the second line of development: why spend enormous resources to synthesize a picture resembling the real world when you can simply draw the necessary objects into our own world? That is how augmented reality appeared, and its requirements are so low that even far-from-flagship smartphones can handle building AR scenes.

Moreover, there are plenty of ways to develop the technology: learning with interactive objects, interactive games and more. At its June presentation, for example, Apple showed Minecraft in AR - no need to explain how much children love this game - where you can both stand outside the scene and move around inside it, and you do so yourself: there are no game characters, you play their role.

So it is quite possible that in a couple of years children will play that same Minecraft not sitting in a circle with tablets, but wandering around the yard and examining the scene from every angle. And in five, maybe ten years, there is a good chance that AR headsets will be handed out to schoolchildren in developed countries.

IPv6 will become the most popular protocol

I think most internet users have at some point seen odd combinations of numbers, such as 95.213.153.203. These are IPv4 addresses, and every internet user and every site has one: the address above, for example, belongs to iGuides.

But there is a problem, understood as early as the 2000s: the total number of possible IPv4 addresses is obviously 256 x 256 x 256 x 256 = 4,294,967,296. There are already more than 7 billion people on Earth, and even more sites. Various tricks such as NAT were invented, letting several devices with private ("gray") IPs hide behind a single public address, but by mid-2019 the problem had become very serious: many regional registries now issue IPv4 addresses in tiny quantities and only through waiting lists, and black-market prices for them have risen sharply.

Is there a solution? Of course - IPv6 addresses. They look more complicated, for example 2001:0db8:11a3:09d7:1f34:8a2e:07a0:765d, but their total number is 2^128 - writing that number out takes 39 digits! Obviously, such a supply will last a very long time: even if the Earth's population grows by an order of magnitude, everyone will still get many billions of addresses.
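The two address spaces side by side, as a quick sanity check of the numbers above:

```python
# IPv4 vs IPv6 address space.
ipv4 = 256 ** 4       # = 2**32
ipv6 = 2 ** 128

print(f"IPv4: {ipv4:,} addresses")                     # 4,294,967,296
print(f"IPv6: {ipv6:,} ({len(str(ipv6))} digits)")     # a 39-digit number
print(f"IPv6 addresses per person (7.7 bln people): {ipv6 / 7.7e9:.1e}")
```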

Testing of the protocol began back in 2008, and in 2012 it was launched for ordinary users and sites. But, as you might expect, the entire internet runs on the principle of "if it works, don't touch it": as long as there are enough IPv4 addresses, IPv6 adoption is rather sluggish, and at the moment only about 14% of sites have IPv6 addresses.

However, given how acute the IPv4 exhaustion problem has recently become, it is obvious that IPv6 will keep spreading, and it is logical to expect that in ten years it will be the dominant protocol, with IPv4 an obsolete atavism.

Active deployment of 6G networks will begin

When GPRS appeared in the late 90s it was considered a miracle: you could go online straight from your phone! EDGE, which arrived around the mid-2000s, was also welcomed warmly - there are still plenty of places in the world where internet access is possible only through this protocol, at speeds often of just tens of kilobits per second, so forget about pictures, let alone video.

But technology did not stand still, and by the end of the 2000s users got 3G, with a tolerable ping of around a hundred milliseconds and enough speed to watch at least 360p video. That was a real breakthrough, and a fair number of users accessed the internet exclusively through 3G modems.

There is little point in dwelling on LTE: it is available in most Russian cities, often fast enough to stream even 4K video, and thanks to a ping of a few tens of milliseconds it is suitable for online games as well.

The new protocol, 5G, has only just entered commercial use (although some small countries, such as Monaco, already have 100% coverage), and on the whole it brings quantitative changes: download speeds have grown to several gigabits per second, and several thousand devices can now "cling" to a single tower, so internet speeds will no longer sag in stadiums and other crowded places.

But what comes next? Within 3-5 years the protocol will finally be polished, it will appear in most countries, and smartphones with 5G support will sell for a couple of hundred dollars. It would seem there is nowhere left to grow?

Of course there is: the 6G standard has not yet been approved, but that does not mean it is not being worked on. It is expected to operate at frequencies of several hundred gigahertz, which would allow speeds of terabits per second. And since a signal at such frequencies is easily blocked even by a hand, smartphones will carry dozens or even hundreds of tiny antennas throughout the body and even in the display.

There is also a chance that 6G will finally kill Wi-Fi: the range of a base station at such frequencies will hardly exceed a few tens of meters, comparable to the coverage of an ordinary home router, and rather than keep two such similar technologies side by side, Wi-Fi may leave the market.

But these are plans not for a year or even five: most likely only in about ten years, once stable 5G coverage exists around the world, will the specifications be approved and 6G network deployment begin.

Satellite Internet will become familiar

It is easy to talk about LTE and 5G while living in a large city in Russia or Europe. But remember that there are plenty of places on Earth where the only way to get any internet at all is a base station exchanging data with satellites. Satellite internet is used today on ships, in remote regions of Russia, on various Pacific islands - and even a hundred kilometers outside your hometown there is a fair chance of seeing "No Service" on your smartphone.

Right now satellite internet can hardly be called cheap or fast: real speeds rarely exceed a dozen megabits per second, prices start at a couple of thousand a month, and honest unlimited data is not to be expected - after a dozen or two downloaded gigabytes the speed is usually throttled. Moreover, since many satellites fly at altitudes of tens of thousands of kilometers, you cannot count on a ping of less than half a second. So it is obvious why such internet is still unpopular with ordinary users.

But things are changing for the better: at least two global satellite internet systems, OneWeb and Starlink, are being developed and tested. The number of satellites in them will run into the hundreds or even thousands, meaning coverage across the entire planet. Their orbits will not exceed one and a half thousand kilometers, and in some cases only hundreds of kilometers, so an acceptable latency below a hundred milliseconds and speeds of several tens of megabits are quite achievable. Both companies also promise that terminals for connecting to satellite internet will cost no more than a couple of hundred dollars and traffic will be unlimited - which really could make this kind of connection popular among ordinary users in remote corners of the globe.
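Orbit altitude is what sets the latency floor. A minimal sketch, assuming a straight up-and-down signal path and no processing delays (real latency is higher):

```python
# Minimum physically possible round trip: user -> satellite -> ground
# station -> satellite -> user, i.e. four passes over the orbit altitude.
C_KM_PER_S = 299_792   # speed of light in vacuum

for name, altitude_km in (("Classic GEO satellite", 35_786),
                          ("Starlink-class LEO", 550)):
    ms = 4 * altitude_km / C_KM_PER_S * 1000
    print(f"{name}: at least ~{ms:.0f} ms")   # ~477 ms vs ~7 ms
```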

Of course, launching and deploying a constellation of hundreds of satellites will take years, and many ground stations will have to be built to "land" the traffic. So affordable satellite internet should not be expected for another 5 years or so, but the chance that in ten years you will be able to watch 4K YouTube from the Siberian taiga, hugging a bear, is quite high.

Smart homes will no longer surprise anyone

Many people have seen in science-fiction films how people of the future enter the house simply by placing a finger on the door handle, after which the kettle starts to boil, the lights come on and a female voice wishes them a pleasant evening and begins reading out the news.

In fact, this is not far from reality: security systems that call the owner or the emergency services when something happens have been around for well over a decade, and the ability to run a hot bath from your smartphone before you even get home is not particularly surprising either.

But, like any relatively young technology, the smart home has plenty of problems. First, the entry threshold is high: a good half of the protocols require you to know how to code, and the other half do not offer much integration. Second, the cost: the simplest smart switches can cost a thousand rubles or more, and a full kit for a two-room apartment with a hub easily goes over fifty thousand. Third, full integration has to be planned at the renovation stage.

So it is no surprise that we still flip a switch when we need light. But things are actively changing for the better: more and more companies offer turnkey smart homes, only a couple of protocols remain in active development and they are moving toward mutual compatibility, so in a few years you will not have to wonder whether a given smart bulb will work with the rest of your system. Costs will fall too, mainly thanks to Chinese companies entering this market with significantly lower prices.

As a result, there is a good chance that in ten years the smart home will become quite common and will be found, in one form or another, in many apartments - though full integration into a shared building-wide system will take much longer still, especially in Russia.

4G networks will be switched off en masse

History shows that some time after a new technology arrives, the technology it replaces leaves the market. USB displaced parallel and serial ports in the early 2000s; the vast majority of Apple's mobile devices long ago swapped the 30-pin connector for Lightning.

A similar thing is happening with mobile networks: over the past few years we have come to treat the EDGE icon on the smartphone screen as the absence of internet, and we rarely see it anyway. In Japan, with its high population density, the number of LTE and 5G base stations is already so large that operators long ago switched off their 2G networks entirely: fewer than 1% of subscribers were still using them, which made their operation simply unprofitable.

So it will be no surprise when operators around the world begin switching off 4G networks in due course: first in large cities with good coverage by newer standards, then in the regions. Moreover, within a few years the first smartphone incompatible with 4G will most likely appear: at the very least it will simplify the antennas, of which there are already too many since the arrival of 5G, and make it easier for the modem to choose a network.

Of course, the whole process will drag on for many years - after all, only 4G can cover tens of square kilometers with a single tower, which matters greatly for sprawling countries like Russia or even the USA. But it is safe to assume that in the world's largest cities, ten years from now, the Nokia 3310 will finally stop working.

Wired connections will become rare

Our world is becoming more and more wireless: remember the last time you updated your iPhone through iTunes, or used a USB flash drive to transfer data. Have you even connected your laptop to the network with an Ethernet cable lately?

With the development of fast wireless internet, wired connections have ceased to be necessary: why copy a movie to your smartphone from a computer if you can stream it over LTE? Why update the system and apps through a computer when Wi-Fi often does it just as quickly and far more conveniently? To say nothing of the fact that uploading large files to the cloud and sharing links on social networks long ago replaced flash drives.

And given that wireless internet keeps getting faster - at 5G speeds a movie in FHD quality downloads in a couple of minutes - the question of whether wired connections are needed disappears by itself.
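A rough sketch of that claim; the 8 GB file size and the link speeds are illustrative assumptions:

```python
# Download time for a full-length FHD movie at typical link speeds.
movie_gb = 8
for name, mbit_s in (("LTE, 50 Mbit/s", 50), ("5G, 1 Gbit/s", 1000)):
    seconds = movie_gb * 8 * 1000 / mbit_s
    print(f"{name}: ~{seconds / 60:.1f} min")   # ~21 min vs ~1 min
```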

Of course, situations will remain where we use flash drives and cables: laptops are unlikely to switch en masse to wireless charging within a decade, and iPhones still require a physical connection for low-level firmware restores via DFU. But for the most part it seems quite reasonable that in a decade we will have forgotten what external hard drives are, just as we have already forgotten what optical drives are.

SSDs will replace hard drives

Hard drives are among the oldest computer storage devices - perhaps only punched cards are older. So it is no surprise that they came into active use in consumer PCs in the 90s, and a huge number of laptops and desktops still ship with them.

But progress does not stop: by the end of the 2000s we had all grown used to the near-instant responsiveness of smartphones, so why should we wait a dozen seconds for an HDD laptop to wake from sleep? Besides, ultra-thin laptops began to develop rapidly at the time, and fitting a hard drive into them was difficult.

That is how the expansion of SSDs into consumer computers began: in the mid-2000s they offered tiny capacities at huge prices, making them a purchase for geeks, but over the last 5 years they have become the norm even in inexpensive devices, since the price per gigabyte has dropped to affordable levels and even the simplest solid-state drive seriously improves a PC's responsiveness.

At the same time, SSD reliability has grown substantially: even inexpensive drives with cheap memory can now often withstand more than 100 TB of writes, which is enough for an average user for at least a dozen years - not every hard drive survives that long.
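A quick estimate of what 100 TB of write endurance means; the 20 GB of writes per day is an assumed figure for a typical home user:

```python
# Years of life from a 100 TBW rating at an assumed 20 GB written per day.
tbw_tb = 100
daily_gb = 20
years = tbw_tb * 1000 / daily_gb / 365
print(f"~{years:.0f} years before the rated endurance runs out")  # ~14 years
```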

And do not forget that SSD prices keep falling: the simplest 120 GB drives can be had for little more than 1,000 rubles, which lets manufacturers put them even in the cheapest PCs and laptops. Given that prices will keep dropping, modern systems increasingly demand fast storage, and cloud storage is developing rapidly, it is logical to assume that in ten years hard drives will be used only by geeks who need to store huge volumes of data without caring about read or write speed (movies, for example), while ordinary users will rely on SSDs alone.

Ray tracing in games will become a familiar technology

Not everyone knows it, but the mathematics of ray tracing for computers was described as far back as the 1980s, after which it was shelved for a long time. Understandably so: correctly constructing shadows and reflections often requires processing tens of millions of rays per second, and the video cards of those years were simply not capable of that.
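A back-of-the-envelope count of why this was hopeless for real-time graphics; one primary ray per pixel is an optimistic assumption, and shadows and reflections multiply it further:

```python
# Ray budget for real-time rendering at 1080p60 with a single primary ray
# per pixel -- before any shadow, reflection or global-illumination rays.
width, height, fps = 1920, 1080, 60
primary_rays = width * height * fps
print(f"Primary rays alone: ~{primary_rays / 1e6:.0f} million per second")  # ~124M
print(f"With one shadow and one reflection ray per pixel: "
      f"~{3 * primary_rays / 1e6:.0f} million per second")                  # ~373M
```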

As computer graphics developed, ray tracing came to be used for realistic imagery in films, and even then it was clear how time-consuming the process was: rendering a single frame of Transformers, for example, often took several tens of hours. Of course, not all of that time went into tracing rays, but the process ate up the lion's share of it.

Only a year ago Nvidia finally introduced the RTX family of video cards with hardware ray-tracing blocks - and it turned out that, at best, a third of the frame was rendered with the technology, and each game used it to compute just one effect: shadows here, reflections there. Even so, the new cards struggled badly: frame rates sometimes dropped by a factor of two or even three when the technology was switched on. And attempts to run games with "RTX on" on previous-generation cards often made them simply unplayable - few people would agree to play Metro Exodus on a GTX 1080 Ti at 15-20 fps, and that not in 4K but in ordinary FHD.

And if you recall that in Quake 2 RTX Edition, where honest tracing is used for the whole frame - both shadows and reflections at once - the RTX 2080 Ti chokes even at 1080p, it is not surprising that a fair number of users criticize the technology and consider the huge performance hit too high a price for a rather modest improvement in picture quality.

Still, few people doubt that ray tracing is the future. The RTX 2000 series is only a first attempt, and game engines are still poor at working with the technology. Given that the new consoles will support RT in hardware, and that Nvidia's next generation of video cards will most likely fix the mistakes and seriously raise performance, there is a chance that within 5 years the first game will appear that strictly requires a ray-tracing-capable video card. And in about ten years, games that cannot work with it will probably be regarded as archaisms.

The first hybrid quantum server processors will appear

It is no secret that the era of silicon processors is approaching its logical end: starting more than 50 years ago, CPUs used to double their performance every couple of years, whereas now we are pleased with a 10-15% gain per generation at best. And no improvement should be expected, because every new process node is won with a fight: five years ago Intel was promising 7 nm by last year, yet its 10 nm processors have only just reached the market. It will only get harder from here, so scientists and engineers are naturally looking for a way out.

You could, of course, replace silicon with another semiconductor, but that would only delay the inevitable by a few years. So something completely new is needed - quantum computers, for example.

They differ qualitatively from ordinary PCs in that they work not with bits, which have a fixed value of 0 or 1, but with qubits, which can hold 0 and 1 at the same time. This allows, for example, brute-force problems to be attacked head-on: in theory, a dozen or two qubits would be enough to index every possible PIN on your iPhone - and since modern quantum computers already operate with dozens of qubits, your data is in danger (of course it isn't).
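For a sense of scale, here is a simplified sketch relating qubit counts to PIN search spaces; it only compares sizes and says nothing about how a real quantum attack (e.g. Grover's algorithm) would actually be implemented:

```python
# n qubits span 2**n basis states; to merely index every possible d-digit
# PIN you need ceil(log2(10**d)) qubits. This is a size comparison only.
import math

for digits in (4, 6):
    combinations = 10 ** digits
    qubits = math.ceil(math.log2(combinations))
    print(f"{digits}-digit PIN: {combinations:,} combinations -> {qubits} qubits")
# 4 digits -> 14 qubits, 6 digits -> 20 qubits
```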

Of course, commercial use of quantum computers is still years away: there are unresolved problems with computational errors, they require special operating conditions, and the whole system has to be reinitialized after every calculation. Besides, ordinary programming languages obviously do not work on such processors, so new ones have to be written.

It is hard to say how long this will take, but look at the progress of the past ten years: in 2009 the best quantum computer had only 2 qubits, while in 2018 Google multiplied that number 36-fold, to 72 qubits, while reducing the error rate to just 1%.

So there is a chance that in about ten years the first hybrid quantum server processors will appear: they will have an ordinary silicon CPU, with a special quantum unit attached for certain types of calculations. At first they will cost astronomical money, of course, but for some problems they will be many orders of magnitude faster than the fastest silicon.

Upgradeable laptops will become a rarity

The miniaturization of technology obviously has more than just upsides such as lower weight and size - the flip side of the coin is a shrinking ability to upgrade the hardware yourself.

Remember the laptops of the early 2000s: yes, they were usually suitcases that were not much fun to carry around every day. But almost everything in them could be changed: a feeble Celeron could be swapped for a dual-threaded Pentium, in places doubling performance. Soldered RAM? Unheard of - nearly every laptop of the time had at least a couple of slots, and swapping the drive for a more capacious one was an absolutely routine procedure.

Now look at a modern MacBook. The processor is soldered - alas, that has been the norm for the past 7 years. The RAM is soldered too and cannot be changed - a somewhat newer habit that ultrabook makers picked up only a few years ago. But a non-replaceable SSD raises plenty of questions. And we probably do not need to explain that, thanks to the Apple T2 chip, most serious repairs can now be carried out only in the company's authorized service centers, which does the price no favors.

For now the MacBook is the extreme case of non-upgradeability, but more and more laptop makers are clearly following Apple: over the past couple of years, popular ultrabooks from Dell, Asus and Xiaomi have lost the ability to add RAM - you have to buy the higher-memory model up front, for which the companies usually charge a lot.

So it will not be surprising if, in a few years, Windows ultrabooks lose the ability to replace the drive as well, becoming completely disposable devices, with ever more compact regular laptops eventually following them. And in about ten years, an ultrabook in which you can swap the SSD or add RAM may well become a rare curiosity.

Smart lenses and glasses will become popular

Again, you know such devices mostly from films about James Bond or Tony Stark: gadgets that can show the eyes more than they can see on their own are invaluable in that line of work. Five years ago Google decided to turn science fiction into reality by introducing its Glass smart glasses.

The idea was as cool as could be: a small screen that did not obstruct the view at all and could display almost any textual and visual information, a built-in camera for taking photos without pulling the smartphone out of your pocket, control by eye movements - in short, a real gadget from the future.

But, as we know, the device did not take off, and for plenty of reasons: a laughable battery life of just a couple of hours, rapid overheating when running heavy applications, low durability (one drop and you can go shopping for a new pair), and a price of $1,500 - recall that this was 5 years ago, when the iPhone cost half as much.

But notice that most of these problems are purely quantitative. Poor battery life? Increase the battery capacity or use a more energy-efficient CPU - there is no shortage of those now - which also solves the overheating. And the cost would be much lower today, because electronics of comparable capability is now found in all kinds of smart watches and bracelets, which often cost an order of magnitude less than Google Glass did.

So it is no surprise that companies keep developing this area, and in an even more interesting direction - smart contact lenses. Modern technology already allows flexible batteries and silicon chips to be embedded in soft lenses only a couple of hundred micrometers thick, as engineers from the French technological university IMT Atlantique demonstrated six months ago. And there are plenty of development opportunities: monitoring glucose levels in tear fluid, which diabetics will appreciate, displaying important information from a paired smartphone, even taking a photo by blinking (Samsung is working on this now) - and, most importantly, the price tag for such lenses should be significantly lower than for Google Glass.

For now this remains a dream - the French team, for example, plan to begin clinical trials only next year - but given that more and more people have poor eyesight and have to wear lenses anyway, many of them would not refuse a smart gadget instead, so it is logical to assume that in ten years smart lenses will become quite popular.

Neural networks will surround us everywhere

Everyone has heard of artificial neural networks over the past few years, and no wonder - what they do can be described in a single word: magic. Want to see yourself in old age? You are welcome. Undress Scarlett Johansson? Nothing easier. Paint a landscape with three strokes? Easy.

But beyond the toys, neural networks are capable of serious work. For example, many chemical compounds are now first calculated theoretically on supercomputers, which check their properties, and only if something special turns up do scientists get involved; most of the computed compounds are useless, and neural networks can greatly cut down their number, raising efficiency. Players in financial markets are using them more and more: having "digested" a year of exchange-rate history, they can predict tomorrow's rate. They are already producing better translations - many have surely noticed how much Google Translate has improved the readability and accuracy of its output over the past few years.

And these are only some of the possible applications; in practice their capabilities are close to limitless. So they will obviously be used more and more actively in completely different fields, from agriculture to image analysis, and the coming decade may well pass under the banner of "put neural networks everywhere, possible or not".

Electronic SIM cards will replace conventional ones

Funnily enough, miniaturization has reached even SIM cards, which are already quite small - leaving aside the original SIM format: do you think the credit-card-sized carrier out of which you snap a micro or nano SIM is made that way for convenience? In fact, that is the original SIM standard, which largely fell out of use by the end of the nineties.

On the whole, though, every reduction after mini SIM has been shrinking for its own sake (to the delight of the phone-shop employees who cut SIM cards for money) - an extra couple of cubic millimeters inside the smartphone hardly changed anything. Over the last few years, however, qualitative changes have begun, akin to the arrival of the physical SIM card in the 90s: we are, of course, talking about eSIM.

What is it? Essentially the same chip, but soldered onto the board inside the gadget. It performs exactly the same functions, has its own identification number and, by and large, works without problems in most modern cellular networks. So what is the advantage? A physical SIM is tied firmly to one operator, whereas an eSIM can store profiles for a dozen operators in its memory, and you are free to switch between them as you like.

And this is a genuinely cool innovation: remember buying local operators' SIM cards at the airport so as not to hand over hundreds of rubles a day for roaming calls? That is now a thing of the past - you just pick the operator you need in the settings in a couple of taps. In practice, though, things are somewhat more complicated.

First, there are legal problems in some countries, including Russia - the laws were, after all, written with physical SIM cards in mind. Fortunately, governments are generally accommodating users, so within a few years most countries will most likely have no issues with using eSIM.

Second, everything depends on the operators, who must implement support for electronic SIM cards and in some cases upgrade their networks. This is also not so simple: in Russia, for example, only Tele2 is fully ready and working with eSIM. Of course, once official permission to use eSIM appears, the rest of the operators will catch up, but that too may take several years.

But looking a full ten years ahead, everything is clear: eSIM will dominate, smartphones will most likely no longer have a slot for ordinary SIM cards, and physical SIMs will be looked at the way we now look at CD-ROMs.
