“90% of the Bitcoin network’s mining power is well-coordinated enough to show up together at the same conference…

Bitcoin’s blockchain is actually controlled by a single huge mining pool operating through a set of subsidiaries – this pool is controlled by Bitmain.”

Source: DSHR’s Blog: “Sufficiently Decentralized”

Recycling some older guidance… I hope this helps someone.

TL;DR:

  • When Should You Use REST?
    • Most commonly used for building microservices-based infrastructures.
      • Any time you plan to build an app or a larger computer system that requires connecting microservices, REST is the most common choice.
    • Best for externally-facing APIs. 
      • If you need standardized HTTP protocol, high-speed iteration, and multi-language microservices connected, then REST should be your main choice. 
    • Universal support from third-party tools makes REST ideal for everything from apps to web services.
  • When Should You Use gRPC?
    • Best for building internal systems where tighter coupling is not an issue. 
    • Useful for connecting architectures that consist of lightweight microservices where the efficiency of message transmission in a multilingual environment is most important.
    • When real-time communication is required.  
    • When used over low-power, low-bandwidth networks.
      • An IoT network would benefit more from gRPC than REST.

REST vs gRPC 

    • gRPC is a high-performance, binary, strongly-typed protocol using HTTP/2.
      • gRPC is a high-performance, open-source framework developed by Google for efficient communication between services using a binary protocol (Protocol Buffers) and HTTP/2
        • Protocol Buffers are the Interface Definition Language (IDL) used to describe service interfaces and payload message structures.
      • gRPC is based upon the RPC (Remote Procedure Call) paradigm
        • An RPC-style API request to delete a resource with the id of “2” might use the HTTP verb POST with a /deleteResource path and a request body of { “id”: 2 } (see the code sketch after this list).
      • gRPC supports bidirectional streaming since it uses HTTP/2.
      • gRPC APIs use their own compiler, protoc, which generates client and server code for you.
        • protoc generates code for multiple languages, which suits polyglot environments (groups of microservices can run on separate platforms and be coded in different languages).
        • protoc compiles .proto files, which contain service and message definitions.
        • gRPC officially supports languages including C++, C#, Dart, Go, Java, Kotlin, Node.js, Objective-C, PHP, Python, and Ruby.
    • REST is a simpler, stateless architectural style that typically uses HTTP 1.1 with text-based JSON/XML messages.
      • REST is a more established, text-based approach leveraging standard HTTP methods for building web APIs.
      • REST follows the architectural constraints of the Representational State Transfer model. 
        • Standard HTTP methods are used with Uniform Resource Identifiers (URIs) to communicate requests and responses between a client and a server.
        • Each URI describes a self-contained operation and contains all the information needed to satisfy the request.
        • A REST API request to delete a resource with the id of “2” could use a URL with the HTTP DELETE verb: DELETE /resource/2.
      • REST is limited to request-response communication patterns since it uses HTTP 1.1.
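
To make the two request styles above concrete, here is a minimal Python sketch. The service name, resource path, host/port, and the generated modules resource_pb2 / resource_pb2_grpc are illustrative assumptions for this example, not part of any real API.

    # --- gRPC (RPC paradigm): call a named procedure with a typed request message.
    # Assumes a .proto file along these lines has been compiled with protoc:
    #   service ResourceService {
    #     rpc DeleteResource (DeleteResourceRequest) returns (DeleteResourceReply);
    #   }
    #   message DeleteResourceRequest { int32 id = 1; }
    #   message DeleteResourceReply   { bool ok = 1; }
    import grpc
    import resource_pb2        # hypothetical generated message classes
    import resource_pb2_grpc   # hypothetical generated client stub

    channel = grpc.insecure_channel("localhost:50051")
    stub = resource_pb2_grpc.ResourceServiceStub(channel)
    reply = stub.DeleteResource(resource_pb2.DeleteResourceRequest(id=2))

    # --- REST: the URI identifies the resource, the HTTP verb names the operation.
    import requests
    response = requests.delete("https://api.example.com/resource/2")
    print(response.status_code)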

Characteristic comparison (gRPC vs REST API):

  • HTTP Protocol: gRPC uses HTTP/2; REST uses HTTP 1.1.
  • Messaging Format: gRPC uses Protobuf (Protocol Buffers), which is binary; REST usually uses JSON (or XML and others), which is text.
  • Code Generation: gRPC has a native protocol compiler (protoc); REST relies on third-party solutions like Swagger.
  • Communication: gRPC supports unary client-request as well as bidirectional/streaming; REST supports client-request only.
  • Receiving Data: gRPC is 7 times faster than REST.
  • Sending Data: gRPC is 10 times faster than REST.
  • Implementation Time: 45 minutes for gRPC vs. 10 minutes for REST.

Protocol Buffers vs XML/JSON

  • XML/JSON
    • Platform and language agnostic 
    • Messages are human-readable and communicate structured data 
  • Protocol Buffers 
    • Platform and language agnostic 
    • Not human readable but highly efficient
      • Serializes and deserializes structured data to communicate via binary
      • Uses a highly compressed format
      • Much faster – focuses strictly on serializing and deserializing data 
      • Reduced message sizes (see the size comparison sketch after this list)
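
A rough Python sketch of the trade-off, assuming a hypothetical Reading message has been compiled by protoc into a reading_pb2 module (the message definition, field names, and values are made up for illustration):

    import json

    # JSON: human-readable text; field names travel inside every message.
    payload = {"id": 2, "name": "sensor-a", "reading": 21.5}
    json_bytes = json.dumps(payload).encode("utf-8")
    print("JSON size:", len(json_bytes), "bytes")

    # Protocol Buffers: binary and schema-driven; field names are replaced by
    # small numeric tags, so the serialized form is compact and fast to parse.
    # Assumes protoc has compiled:
    #   message Reading { int32 id = 1; string name = 2; double reading = 3; }
    import reading_pb2  # hypothetical generated module
    msg = reading_pb2.Reading(id=2, name="sensor-a", reading=21.5)
    proto_bytes = msg.SerializeToString()
    print("Protobuf size:", len(proto_bytes), "bytes")

The exact savings depend on the message shape, but for typical structured data the binary encoding comes out noticeably smaller than the equivalent JSON text.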

HTTP 1.1 vs HTTP/2

  • HTTP 1.1
    • The standard for communication on the web. 
    • Relays information between a computer (client) and a web server (server), which may be local or remote. 
    • The client sends a text-based request and a resource (web page, PDF, message, etc.) is returned from the server.
    • Does not support streaming – request/response only.
  • HTTP/2
    • Supported by most modern browsers in addition to HTTP 1.1.
    • HTTP/2 uses binary messages instead of plain text (smaller payloads, faster throughput).
    • HTTP/2 reduces network delay through the use of multiplexing (enables multiple requests to fire simultaneously on the same connection, receiving requests back in any order).
    • Supports three types of streaming, which gRPC uses for its streaming RPCs (see the Python client sketch after this list):
      • Server-side (long running process on server over a single connection – server updates client with progress and final result):
        1. A client sends a request message to a server. 
        2. The server returns a stream of responses back to the client. 
        3. After completing the responses, the server sends a status message (and, in some cases, trailing metadata), which completes the process. 
        4. After receiving all of the responses, the client completes the process. 
      • Client-side (client sends multiple requests to server over a single connection, server sends back response when all requests are done): 
        1. A client sends a stream of request messages to a server. 
        2. The server returns one response to the client, (usually) after it has received all of the requests, along with a status message (and sometimes trailing metadata).
      • Bi-directional (chatty – controlled by the client): 
        1. A client and server transmit data to one another in no particular order. 
        2. The client is the one that initiates this kind of bidirectional streaming.
        3. The client ends the connection.
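
To illustrate the server-side streaming flow above, here is a minimal gRPC client sketch in Python. The JobService service, RunJob RPC, percent_complete field, and the job_pb2 / job_pb2_grpc modules are hypothetical names used only for this example.

    import grpc
    import job_pb2        # hypothetical generated message classes
    import job_pb2_grpc   # hypothetical generated client stub

    channel = grpc.insecure_channel("localhost:50051")
    stub = job_pb2_grpc.JobServiceStub(channel)

    # Server-side streaming: one request goes up, a stream of responses comes
    # back over a single HTTP/2 connection.
    for update in stub.RunJob(job_pb2.RunJobRequest(job_id=42)):
        print(f"progress: {update.percent_complete}%")

    # When the loop finishes, the server has sent its final status (and any
    # trailing metadata), which completes the call on the client side.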

DevOps unites development and operations: it is the practice of breaking up monolithic architectures and teams to create smaller, autonomous teams that can build, deliver, and run applications.

Platform Engineering (PE) focuses on abstracting away infrastructure and other concerns that distract DevOps teams from delivering in their domain. PE is a fairly new buzzword/concept and is really just a subset of DevOps.

Site Reliability Engineering (SRE) focuses on helping DevOps and internal platform teams increase reliability, scalability and security.

DevOps vs SRE vs PE

  • DevOps focuses on the development side.
  • SRE focuses on the operations side.
  • PE focuses on internal development enablement and is really a part of DevOps.

SRE and Platform Engineering benefit from the three ways of DevOps:

  1. Concentration on increasing flow
  2. Tight feedback loops
  3. Continuous experimentation, learning and improvement

Role comparisons:

  • Infrastructure Engineer – Generic term for engineers who work on core infrastructure.
  • Cloud Engineer – Engineers who work on public clouds (AWS, Azure, GCP, etc.).
  • SRE – Software engineers who focus on application reliability, uptime and error budgets, and toil automation. Three-letter acronyms are their friends (SLO, SLA, SLI).
  • DevOps Engineer – Infrastructure engineers who focus on reducing silos between development teams and infrastructure teams. NOTE: If your team has dedicated DevOps Engineers, your org isn’t really practicing DevOps.
  • Platform Engineer – Engineers who focus on designing and building tools and workflows that enable self-service. Enablers of software engineering teams.