Microservice Patterns - 3.1 Interprocess communication in a microservice architecture



Overview of interprocess communication in a microservice architecture
Interaction styles
Dimension - 1
  • One-to-one—Each client request is processed by exactly one service
    • Request/response: the client sends a request and waits for a response
    • Asynchronous request/response: the client sends a request without blocking; the service replies later
    • One-way notifications: the client sends a notification to the service and expects no reply
  • One-to-many—Each request is processed by multiple services
Dimension - 2
  • Synchronous—The client expects a timely response from the service and might even block while it waits.
  • Asynchronous—The client doesn’t block, and the response, if any, isn’t necessarily sent immediately.
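A minimal sketch of the asynchronous request/response style. In-memory queues stand in for a message broker channel (an assumption for illustration; a real system would use something like RabbitMQ or Kafka), and the service name and payload are made up:

```python
import queue
import threading

# Hypothetical channels standing in for broker queues.
request_channel = queue.Queue()
reply_channel = queue.Queue()

def order_service():
    # The service reads one request and sends the reply later:
    # asynchronous request/response.
    order_id = request_channel.get()
    reply_channel.put(f"order {order_id} accepted")

threading.Thread(target=order_service, daemon=True).start()

request_channel.put("xyz")   # client sends without blocking on the service
reply = reply_channel.get()  # client collects the reply whenever it arrives
```

The client could do other work between sending and collecting the reply, which is exactly what distinguishes this style from blocking request/response.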

USE SEMANTIC VERSIONING
  • MAJOR—When you make an incompatible change to the API
  • MINOR—When you make backward-compatible enhancements to the API
  • PATCH—When you make a backward-compatible bug fix
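The MAJOR/MINOR/PATCH rules can be expressed as a small compatibility check; `is_backward_compatible` below is a hypothetical helper, not part of any library:

```python
def is_backward_compatible(provider: str, consumer: str) -> bool:
    """True if a client built against `consumer` can use an API at `provider`.

    Compatible when the MAJOR versions match (no incompatible change) and
    the provider's MINOR.PATCH is at least what the consumer expects
    (MINOR and PATCH changes are backward compatible).
    """
    p_major, p_minor, p_patch = (int(x) for x in provider.split("."))
    c_major, c_minor, c_patch = (int(x) for x in consumer.split("."))
    return p_major == c_major and (p_minor, p_patch) >= (c_minor, c_patch)
```

For example, a provider at 1.5.2 can serve a consumer built against 1.4.0, but a provider at 2.0.0 cannot serve a 1.x consumer.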

Robustness principle / Postel's law
Be conservative in what you do, be liberal in what you accept from others 
(often reworded as "Be conservative in what you send, be liberal in what you accept").
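On the consumer side, "be liberal in what you accept" often takes the form of a tolerant reader: pick out only the fields you need and ignore everything else, so the provider can add fields without breaking you. `parse_order` and its fields are illustrative:

```python
import json

def parse_order(payload: str) -> dict:
    # Tolerant reader: extract only the fields this consumer needs and
    # silently ignore any unknown fields the provider may have added.
    data = json.loads(payload)
    return {"id": data["id"], "total": data.get("total", 0)}
```

A consumer written this way keeps working when the provider's payload grows a `newField`, which is a backward-compatible (MINOR) change under semantic versioning.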

MAKING MAJOR, BREAKING CHANGES
Let the API gateway translate requests against the old API version into the new-version implementation. Two common ways to specify the version:
  1. In the URL path: /v1/… and /v2/…
  2. In an HTTP header, for example:
    GET /orders/xyz HTTP/1.1
    Accept: application/vnd.example.resource+json; version=1 
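A sketch of how a gateway might extract the version from such a header by reading the media-type parameters (`api_version` is a hypothetical helper):

```python
def api_version(accept_header: str, default: int = 1) -> int:
    # Look for a "version=N" parameter in a media type such as
    # "application/vnd.example.resource+json; version=1".
    for part in accept_header.split(";"):
        key, _, value = part.strip().partition("=")
        if key == "version" and value.isdigit():
            return int(value)
    return default  # no version parameter: assume the oldest supported API
```

Defaulting to the oldest version when the parameter is absent keeps existing clients working, in the spirit of the robustness principle above.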

Message formats
  1. Use a cross-language message format.
  2. E.g., do NOT use Java serialization, which couples both client and service to Java.
  3. Two main categories:
    1. TEXT-BASED MESSAGE FORMATS, e.g. JSON, XML
      1. Good: the messages are human readable and self-describing
      2. Bad:
        1. messages tend to be verbose (e.g., XML repeats element names)
        2. parsing text adds overhead, especially when messages are large
    2. BINARY MESSAGE FORMATS, e.g. Protocol Buffers
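To make the verbosity point concrete, the sketch below compares a JSON encoding with a hand-rolled fixed binary layout, using the standard-library `struct` module as a stand-in for a real schema-based format such as Protocol Buffers (the order fields are made up):

```python
import json
import struct

order = {"order_id": 123456, "quantity": 7}

# Text format: human readable and self-describing, but every field name
# travels with every message.
text = json.dumps(order).encode()

# Binary format: a fixed layout (little-endian unsigned int + short) is
# compact, but both sides need the schema to interpret the bytes.
binary = struct.pack("<Ih", order["order_id"], order["quantity"])

size_ratio = len(text) / len(binary)  # the binary message is several times smaller
```

The trade-off is exactly the one listed above: the binary message cannot be read or debugged without its schema, while the JSON message explains itself.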

