Internet applications and data transmission are now ubiquitous. Notably, the major cloud providers — AWS, Microsoft, Google, and Alibaba — also offer physical data-transport services (shipping hard disks and other media), because as the scale of computing grows, the cost of moving data over the network keeps rising.
We see a similar problem in commuting: everyone is familiar with traffic jams on the way from the suburbs to the city centre, and new ways of working, such as working from home and remote collaboration, relieve that congestion. The same idea can be applied to data-transmission congestion: distribute computing resources close to the data source, pre-process the data there, or use a decentralized computing framework to process data directly on the devices nearest to where it is generated. Computing at the source reduces both the amount of data sent back to the data center and the transfer time. Cloud computing therefore needs to move away from traditional centralized data centers and bring its services closer to users, with more distributed scheduling capabilities and distributed data sources.
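The savings from pre-processing at the data source can be sketched in a few lines. The example below is illustrative only (the sensor trace and the 60-second window are made up): an edge node collapses a raw reading stream into per-window summaries before uploading, so far fewer bytes cross the network.

```python
import json

# Hypothetical sensor trace: one temperature reading per second for an hour.
readings = [{"ts": i, "temp_c": 20 + (i % 10) * 0.1} for i in range(3600)]

def summarize(window):
    """Pre-process at the edge: keep only min/max/mean per window."""
    temps = [r["temp_c"] for r in window]
    return {
        "ts_start": window[0]["ts"],
        "min": min(temps),
        "max": max(temps),
        "mean": sum(temps) / len(temps),
    }

# Aggregate into 60-second windows before sending anything upstream.
summaries = [summarize(readings[i:i + 60]) for i in range(0, len(readings), 60)]

raw_bytes = len(json.dumps(readings).encode())
edge_bytes = len(json.dumps(summaries).encode())
print(f"raw: {raw_bytes} B, after edge pre-processing: {edge_bytes} B")
```

Only the summaries travel to the data center; the raw stream never leaves the device.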
Cloud computing now faces a new turning point. 5G lays the technical foundation for connecting all things, and the data-processing capability, the number of devices, and the volume of data generated at the client or edge will grow dramatically. In the coming competition over data, a model that ships edge-generated data to a centralized computing cloud for processing and then feeds the results back to IoT devices spends far more bandwidth and time than one that computes and responds at nearby edge nodes; the centralized model is clearly at a disadvantage.
Gravity hopes to break this deadlock. In the field of shared computing, Gravity builds a computing-power market on the blockchain through cross-platform heterogeneous scheduling, standardizing heterogeneous computing power into VCU units. Through decentralized heterogeneous scheduling, tasks are matched with the resources and business scenarios they require, and a computing network is constructed according to the geographical distribution and computing capability of different devices.
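The text does not spell out how heterogeneous resources map onto VCU units, so the conversion weights below are purely an assumption made for illustration: each resource type gets a factor, and a node's resource bundle collapses to a single comparable score.

```python
# Illustrative only: the VCU formula is not specified in the source,
# so these weights are invented for the sketch.
VCU_WEIGHTS = {"cpu_core": 1.0, "gpu": 8.0, "ram_gb": 0.25, "bandwidth_mbps": 0.05}

def to_vcu(resources):
    """Normalize a heterogeneous resource bundle into one VCU score."""
    return sum(VCU_WEIGHTS[kind] * amount for kind, amount in resources.items())

# Two very different nodes become directly comparable.
pc = {"cpu_core": 4, "gpu": 0, "ram_gb": 16, "bandwidth_mbps": 100}
miner = {"cpu_core": 8, "gpu": 2, "ram_gb": 32, "bandwidth_mbps": 50}
print(to_vcu(pc), to_vcu(miner))  # 13.0 34.5
```

With a common unit like this, a PC, a mining rig, and spare public-cloud capacity can all be priced and scheduled on the same market.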
Gravity uses idle computing power to reduce computing and bandwidth costs: its compute nodes are closer to the user edge than public clouds, and their computing power is stronger than that of edge-computing gateways.
The main users of Gravity fall into two groups: computing providers and computing consumers. A provider can contribute idle resources (including PCs, spare capacity on public clouds, mining machines, mining pools, etc.), join the Gravity computing-power market, and receive and run different types of computing tasks depending on the resources offered. The market automatically matches supply and demand, and nodes that contribute computing power automatically earn the corresponding income.
Gravity has implemented a heterogeneous, cross-platform scheduling network that can dispatch detachable jobs, stateful jobs, and containers across different operating systems, different CPU architectures, and different types of devices (set-top boxes, phones, PCs), with these resource-scheduling networks complementing one another.
Correspondingly, a computing consumer can purchase the same cloud services in the Gravity computing market as from a public-cloud provider, but at an ultra-low price. The consumer does not need to care about the composition of the IaaS layer (it may be ARM devices or servers), and the price may be only a third of a traditional public-cloud service.
Gravity provides cost-effective cloud services built on scattered idle computing resources: the elastic computing service GEdge, the function-computing service GFunction, and the big-data computing service GPMR. Underneath, these services rely on a VPC network, and an ICE-based protocol implements the network connections between nodes.
GEdge: Gravity's elastic scheduling platform. Computing providers use the client supplied by Gravity to manage and launch nodes that contribute computing power. Consumers can deploy private-image microservices through the GEdge management page, perform real-time management and task monitoring there, and get SSH access to all of their instances, so they can manage and deploy services conveniently and quickly.
GFunction: Gravity function computing, benchmarked against AWS Lambda. Users can easily create functions and quickly build serverless services. It supports languages such as Python, Node.js, Go, Java, and PHP, provides message queues and HTTP triggers, and offers automatic elastic scaling.
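Since AWS Lambda is the stated benchmark, a GFunction handler would presumably look similar to a Lambda handler. The sketch below assumes a Lambda-style event/handler signature and an HTTP-trigger event shape; GFunction's real interface may differ.

```python
import json

# Assumed Lambda-style signature; GFunction's actual contract is not
# documented in the source text.
def handler(event, context=None):
    """HTTP-triggered function: return a greeting for the given name."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

# Simulate what an HTTP trigger would deliver.
resp = handler({"queryStringParameters": {"name": "gravity"}})
print(resp["body"])  # {"message": "hello, gravity"}
```

The platform, not the user, would handle wiring the trigger, scaling instances up and down, and billing per invocation.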
GPMR: Gravity Edge MapReduce provides a heterogeneous MapReduce framework that can run on a variety of common edge devices such as mobile phones and PCs. Compute nodes connect in peer-to-peer mode, and all nodes are peers. It suits workloads with many participating devices where the data can be split into small fragments for computation. Consumers can write their own MapReduce code, submit it on the data-development platform, and retrieve the computed results.
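The fragment-based model can be illustrated with a minimal word-count MapReduce: each peer node runs the map phase over its local fragment, and the emitted pairs are reduced into one result. The function names and in-process "fragments" here are invented for illustration; GPMR's actual API is not specified in the text.

```python
from collections import Counter
from itertools import chain

def map_phase(fragment):
    """Runs on each peer node against its local data fragment."""
    return [(word, 1) for word in fragment.split()]

def reduce_phase(pairs):
    """Aggregates the (word, count) pairs emitted by all nodes."""
    totals = Counter()
    for word, n in pairs:
        totals[word] += n
    return dict(totals)

# Data split into small fragments, one per participating edge node.
fragments = ["edge compute on phones", "edge compute on pcs", "edge data"]
mapped = chain.from_iterable(map_phase(f) for f in fragments)
counts = reduce_phase(mapped)
print(counts)
```

In GPMR the fragments would live on different peer-connected devices and the framework would handle shuffling the mapped pairs to the reducers; the computation expressed by the user is the same two functions.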
The above three products can be combined according to actual business scenarios and requirements:
GEdge is suitable for long-running services (such as microservices, rendering services, etc.)
GFunction is suitable for building serverless APIs (e.g. web crawlers, model training, etc.)
GPMR is suitable for data that needs to be aggregated and for stateful computing tasks (such as processing GFunction's output data)
You may also have questions about privacy. For example, if a user runs model training on GEdge, we must consider whether the owner of the host machine can access the training files; a sophisticated owner could inspect the code running on the machine and gain access to those resources.