Latency is important to carriers and property owners

Unacceptable latency occurs when data processing takes place too far from the “Edge” where the data originates

Connected Edge Partners (CEP) understands the role latency plays in advanced telecommunications and that unacceptable latency is often difficult to resolve because technology simply cannot "fix" the speed of light.  We solve the problem of poor latency by enabling data processing capabilities at the “Edge” (e.g., in commercial buildings and retail centers), thereby eliminating avoidable distance.  In doing so, we increase our property owner partners' real estate valuations and make them more competitive in attracting and retaining quality tenants because, through our partnering, they can now successfully address the low-latency requirements tenants have today, with latency-reducing data processing and computing capabilities located in their buildings, physically closer to the “Edge”.


What is Latency?

While computing power and network bandwidth have improved a thousandfold in the past few decades (with pricing falling as well), one thing that has not improved much is network latency. Network latency is a function of distance, transmission medium, network repeaters and routers, network processing, and, importantly, the speed of light. Data takes milliseconds to travel, and in our emerging digital society, with its constant search for potential "business moments", those milliseconds matter.  For example, in online retailing every 100 ms of delay produces a 1% loss of sales, and in online search an extra 500 ms of delay in returning results can cause a 20% drop in traffic and revenue.


Latency is the round-trip delay: the duration of time from when a data packet is transmitted until the acknowledgment is received back at the originating point.  Latency is a direct consequence of the speed of the physical transmission medium (i.e., guided, as in fiber optic or copper cable, or unguided, as in air) and the distance traveled, and specific network topologies can exacerbate response times back to end-users because of these physical constraints.  As the number of applications being introduced into networks increases dramatically and the demand for reliable communications grows, latency sensitivity is becoming an ever-greater concern for our nation's telecommunications carriers.  In communications, the lower limit of latency is determined by the distance and the medium being used.  This is a concern because latency impacts reliable two-way communication systems and limits the maximum rate at which information can be transmitted, as there is often a limit on the amount of information that is "in-flight" at any one moment. And, in the field of human–machine interaction (such as a cell phone application or gaming), perceptible latency has a strong effect on user satisfaction and usability.
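
As a quick illustration of that physical lower bound, the Python sketch below computes one-way and round-trip propagation delay from distance and medium speed; the example path lengths and the two-thirds-of-c speed for guided media are illustrative assumptions, not measurements.

    # Lower bound on latency set by distance and medium (illustrative sketch).
    SPEED_OF_LIGHT_KM_S = 299_792.458   # km/s in a vacuum
    FIBER_FRACTION = 2 / 3              # assumed: light in guided media travels at roughly 2/3 c

    def propagation_delay_ms(distance_km, medium_speed_km_s):
        """One-way propagation delay in milliseconds."""
        return distance_km / medium_speed_km_s * 1000

    for distance_km in (100, 1_000, 4_000):  # assumed example path lengths
        one_way = propagation_delay_ms(distance_km, SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
        print(f"{distance_km:>5} km: one-way {one_way:6.2f} ms, round-trip {2 * one_way:6.2f} ms")

No amount of hardware improvement can move these figures below the floor set by the distance itself, which is the point made above.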


In a packet-switched data network, latency is measured either one-way, as the time from the source sending a packet to the destination receiving it, or as a round-trip delay: the one-way latency from source to destination plus the one-way latency from the destination back to the source.  Round-trip latency is most often quoted, because it can be measured from a single point. Note that round-trip latency excludes the amount of time that a destination system or equipment spends processing the packet.  In a typical network, however, packets are forwarded over many links via many gateways, each of which does not begin to forward a packet until it has been completely received.  In such a network, the minimal latency is the sum of the minimum latency of each link, plus the transmission delay of each link except the final one, plus the forwarding latency of each gateway. In practice, this minimal latency is further increased by queuing and processing delays.  Queuing delays occur when a gateway receives multiple packets from different sources heading toward the same destination; since typically only one packet can be transmitted at a time, some of the packets must queue for transmission, incurring additional delay.  Processing delays are incurred while a gateway determines what to do with a newly received packet.  The combination of propagation, serialization, queuing, and processing delays often produces a complex and variable network latency profile.  Latency limits the maximum throughput achievable in reliable two-way communications.
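
The minimal-latency sum described above can be written out directly. The following sketch uses invented per-link and per-gateway figures purely to show the arithmetic:

    # Minimal end-to-end latency across store-and-forward hops (illustrative figures).
    # minimal latency = sum(propagation delay of each link)
    #                 + sum(transmission delay of each link except the last)
    #                 + sum(forwarding latency of each gateway)

    links = [  # (propagation_ms, transmission_ms) -- assumed example values
        (0.5, 0.12),
        (2.0, 0.12),
        (1.2, 0.12),
    ]
    gateway_forwarding_ms = 0.05  # assumed per-gateway forwarding latency

    propagation = sum(p for p, _ in links)
    transmission = sum(t for _, t in links[:-1])            # final link's transmission excluded
    forwarding = gateway_forwarding_ms * (len(links) - 1)   # one gateway between consecutive links

    print(f"minimal one-way latency: {propagation + transmission + forwarding:.2f} ms")

Queuing and processing delays then add a variable amount on top of this floor, which is why measured latency fluctuates with load.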


Fiber optic latency 

Fiber optic latency is largely a function of the speed of light, which is 299,792,458 meters/second in a vacuum.  This equates to a latency of 3.33 µs for every kilometer of path length.  The refractive index of most fiber optic cable is about 1.5, meaning that light travels about 1.5 times as fast in a vacuum as it does in the cable.  This works out to about 5.0 µs of latency for every kilometer.  A key problem with fiber optic backhaul latency is the long and variable distance the cable travels to get from the end-user source to the POP of the carrier providing the backhaul service, which can be many miles away.  These distance issues are compounded by the fact that cable is never installed in a straight line, since it must traverse geographic contours and obstacles, such as roads and railroad tracks, as well as those found in other types of rights-of-way.  Due to imperfections in the fiber, light also degrades as it is transmitted over these distances.  For distances greater than 100 kilometers, either amplifiers or higher-latency regenerators need to be deployed.  Passive amplifiers typically add less latency than regenerators, at the cost of compounding attenuation, though in both cases the added latency can be highly variable.
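
Those per-kilometer figures can be reproduced from the constants above; the route distance and path-inflation factor in the sketch below are illustrative assumptions reflecting the fact that installed cable never runs in a straight line.

    # Fiber latency from the speed of light and the fiber's refractive index.
    C_VACUUM_M_S = 299_792_458          # m/s
    REFRACTIVE_INDEX = 1.5              # typical for optical fiber

    us_per_km_vacuum = 1e9 / C_VACUUM_M_S                   # ~3.33 us/km
    us_per_km_fiber = us_per_km_vacuum * REFRACTIVE_INDEX   # ~5.0 us/km

    straight_line_km = 40               # assumed end-user-to-POP distance
    path_inflation = 1.4                # assumed detour factor for real cable routing

    cable_km = straight_line_km * path_inflation
    print(f"fiber: {us_per_km_fiber:.2f} us/km; "
          f"{cable_km:.0f} km route -> one-way {cable_km * us_per_km_fiber / 1000:.2f} ms")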


The problem is that, in today's broadband networks, the physical distance between the end user and the data center can be hundreds or thousands of miles, leading to far too much latency.  Addressing this requires greater efficiency, greater reliability, and shorter network distances.  CEP-enabled colocation, located in its partner property owners' buildings, will successfully address Edge Computing's previously discussed dilemma: the greater the distance between where data is created and where it is computed, analyzed, and stored at the data center, the greater the negative impact on the functionality of that computing and on the ability to process the data at the speeds required.
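
As a rough propagation-only comparison using the ~5 µs/km fiber figure from above (the two distances are illustrative assumptions):

    # Round-trip fiber propagation: distant data center vs. in-building edge.
    US_PER_KM_FIBER = 5.0  # ~5 us/km, from the refractive-index calculation above

    for label, km in (("regional data center, ~1,600 km (about 1,000 miles) away", 1_600),
                      ("in-building edge, ~2 km away", 2)):
        rtt_ms = 2 * km * US_PER_KM_FIBER / 1000
        print(f"{label}: ~{rtt_ms:.2f} ms round-trip, propagation only")

Real networks add serialization, queuing, and processing delay on top of these figures, and the longer path traverses more gateways, widening the gap further.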


Latency and Wall Street

With respect to financial trading, The Tabb Group states, "… the value of time for a trading desk is decidedly nonlinear … if a broker's electronic trading platform is 5 ms behind the competition, it could lose at least 1% of its flow; that's $4 million in revenue per millisecond.  Up to 10 ms of latency could result in a 10% drop in revenue. From there it gets worse."  So, minimizing latency is of great interest to Wall Street and to the capital markets, where algorithmic trading is used to process market updates and turn around orders within milliseconds.


Low-latency trading refers to the network connections used by financial institutions to connect to stock exchanges and Electronic Communication Networks (ECNs) to execute financial transactions.  Latency affects a trade through three components: 1) the time it takes for information to reach the trader, 2) the time the trader's algorithms take to analyze that information, and 3) the time the generated action takes to reach the exchange and be implemented.   This can be contrasted with the way latencies are measured by many trading venues, which use much narrower definitions, such as the processing delay measured from the entry of an order (at the vendor's computer) to the transmission of an acknowledgment (from the vendor's computer).   With the spread of computerized trading, electronic trading now makes up 60% to 70% of the daily volume on the New York Stock Exchange, and algorithmic trading close to 35%.  Computerized trading has developed to the point where millisecond improvements in network speeds offer huge competitive advantages for financial institutions.
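
A minimal sketch of that narrower, venue-style measurement, timing from order entry to acknowledgment, is shown below; send_order and await_ack are hypothetical stand-ins for a real trading connection, not an actual trading API.

    import time

    def measure_order_latency_ns(send_order, await_ack):
        """Round-trip latency from order entry to venue acknowledgment.

        send_order and await_ack are hypothetical callables standing in
        for a real trading connection; this only shows the timing pattern.
        """
        start = time.perf_counter_ns()   # monotonic clock, nanosecond resolution
        send_order()
        await_ack()
        return time.perf_counter_ns() - start

    # Example with stand-in callables (no real venue involved):
    latency_ns = measure_order_latency_ns(lambda: None, lambda: None)
    print(f"order-to-ack latency: {latency_ns / 1e6:.3f} ms")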


Latency and the automobile

Milliseconds matter even in something such as a self-driving car traveling 70 miles per hour, where 100 ms equals roughly 10 feet of travel (a check of this arithmetic follows below).  And when we have two self-driving cars, or two dozen, all traveling toward the same location, 100 ms is an eternity.  A lot can happen in a few milliseconds, and lives could be at risk.  So, milliseconds matter to people.  And, when using VR hardware, anything more than a five to ten ms delay from head movement to view movement is noticeable and can cause motion sickness.  While much of this can be solved at the immersive interface itself, that is not true when the human is also interfacing with dozens or hundreds of dynamic things interactively, in real time at the “Edge”. Real time does not allow for 50, 100, or 200 ms delays.  As one gets further away from the central core Cloud and its general-purpose capabilities, CEP's “Edge” locations, in its property owner partnered buildings, will be tasked by its interconnecting tenants, competing carriers, 5G locations, and backhaul providers to handle very specific roles for very specific purposes supporting the store, the business, the Smart City stoplight, and the nearby traveling car.   In all this, the right choices at this “Edge” will determine what takes place in the central core Cloud, and not vice versa.
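
The 70 mph figure works out as a simple unit conversion, sketched here with the speed and delay taken from the example above:

    # Distance a 70 mph vehicle covers during a 100 ms delay.
    MPH = 70
    FEET_PER_MILE = 5_280
    feet_per_second = MPH * FEET_PER_MILE / 3_600                   # ~102.7 ft/s
    print(f"{feet_per_second * 0.100:.1f} feet traveled in 100 ms")  # ~10.3 feet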