TIL

Parsing HTTP header with Scapy

Analyzing HTTP with a regular expression. The incoming data is split into the METHOD, URI, and HTTP version fields via a regular expression:

    import re

    def processHTTP(data):
        str_method = ""
        str_uri = ""
        # Split the incoming data into METHOD, URI and HTTP version using a regular expression
        h = re.search("(?P<method>(^GET|^POST|^PUT|^DELETE)) (?P<uri>.+) (?P<version>.+)", data)
        if not h:
            return "Error"  # Return "Error" when the data does not match the regular expression

        # Store the part captured as "method" in str_method
        if h.group("method"):
            str_method = h.group("method")

        # Store the URI data in str_uri
        if h.group("uri"):
            str_uri = h.group("uri")

        return str_method, str_uri  # Return the method and the URI

Source: http://www.
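As a quick check, feeding a sample request line (my own example input) into the function above gives:

```python
method, uri = processHTTP("GET /index.html HTTP/1.1")
print(method, uri)  # -> GET /index.html
```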

Google's Load Balancer

Maglev (https://research.google.com/pubs/pub44824.html)
- Has been serving Google's traffic since 2008
- Scalable load balancer
  - Consistent hashing
  - Connection tracking
  - Scale-out model backed by the router's ECMP
  - Bypasses kernel space for performance
  - Supports connection persistence
- Network Architecture
  - DNS - Routers - Maglevs - Service Endpoints
  - One service is served by one or more VIPs
    - DNS returns a VIP considering geolocation and the load of each location
  - One VIP is served by multiple Maglevs
    - The router uses ECMP to select one Maglev
  - One VIP is mapped to multiple Service Endpoints
    - A Maglev selects a Service Endpoint using its selection algorithm and connection tracking table
  - A Maglev uses GRE to send the incoming packet to a Service Endpoint or to another Maglev
    - IP fragments are sent to dedicated special Maglev servers
    - Only a 3-tuple is used for IP fragments
  - Each Service Endpoint uses Direct Server Return (DSR)
- Maglev Controller
  - Responsible for VIP announcement via BGP
  - Checks the health status of the forwarder
  - If the forwarder is not healthy, withdraws all VIP announcements
- Forwarder
  - Each VIP has one or more backend pools (BP)
  - A BP contains the physical IP addresses of the Service Endpoints
  - Each BP has specific health-checking methods - depends on the service requirement (just reachability or more)
  - The Config Manager parses and updates the forwarder's behavior based on the Config Objects
- Sharding
  - Sharding of Maglevs enables service isolation - new services or QoS
- Backend Selection
  - Consistent hashing distributes the load
  - The selection is recorded in a LOCAL connection tracking table
    - The connection tracking table is not shared with other Maglevs
    - Consistency is not guaranteed across Maglev or Service Endpoint changes (add/delete)
  - Per traffic type
    - TCP SYN: select a backend and record it in the connection tracking table
    - TCP non-SYN: look up the connection tracking table
    - 5-tuple: (maybe) look up the connection tracking table and select a backend if not found
- Consistent Hashing
  - If a Maglev is added or removed, the router selects a different Maglev for existing sessions - the ECMP result changes
  - If one Maglev's local connection tracking table overflows, it loses its previous selections
  - To resolve these issues, the local connection tracking tables could be synchronized between Maglevs -> overhead, overhead, overhead
  - Consistent hashing minimizes disruption on membership changes
  - Maglev hashing - load balancing with minimal disruption on membership changes (see the sketch below)
- References
  - Maglev: A Fast and Reliable Software Network Load Balancer
  - Consistent Hashing
  - The Simple Magic of Consistent Hashing
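The lookup-table population step of Maglev hashing is simple enough to sketch. The following is a minimal Python sketch based on the paper's description (offset/skip permutations, round-robin filling of a prime-sized table); the MD5 stand-in hash and the table size are my own assumptions, not what Google uses. Rebuilding the table after a backend is added or removed remaps only a small fraction of slots, which is the minimal-disruption property noted above.

```python
import hashlib

def _hash(value: str, seed: str) -> int:
    # MD5 as a stand-in hash for the sketch; the paper does not mandate a specific one
    return int(hashlib.md5((seed + value).encode()).hexdigest(), 16)

def build_maglev_table(backends, table_size=65537):
    """Populate a Maglev lookup table; table_size should be prime."""
    n = len(backends)
    # Each backend gets a pseudo-random permutation of the table slots (offset/skip)
    permutations = []
    for b in backends:
        offset = _hash(b, "offset") % table_size
        skip = _hash(b, "skip") % (table_size - 1) + 1
        permutations.append([(offset + j * skip) % table_size for j in range(table_size)])

    table = [None] * table_size
    next_idx = [0] * n
    filled = 0
    while filled < table_size:
        for i in range(n):
            # Backend i claims its next preferred slot that is still empty
            c = permutations[i][next_idx[i]]
            while table[c] is not None:
                next_idx[i] += 1
                c = permutations[i][next_idx[i]]
            table[c] = backends[i]
            next_idx[i] += 1
            filled += 1
            if filled == table_size:
                break
    return table

def pick_backend(table, five_tuple: str):
    # Hash the connection 5-tuple into the lookup table
    return table[_hash(five_tuple, "pkt") % len(table)]

table = build_maglev_table(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(pick_backend(table, "198.51.100.7:51234->203.0.113.10:80/tcp"))
```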

OPNFV Asia Meeting

ZTE Pharos Lab
Pharos is a worldwide lab for testing OPNFV. Since external access is required but SSH cannot simply be opened up, they seem to have put effort into issues such as providing OpenVPN access and segregating the networks. https://www.opnfv.org/developers/pharos
Pharos appears to build test environments and open them up so the community can use them: "Provide developers with substantial resources for early testing within realistic NFV environments via an open, consistent, repeatable test domain."
Why donate a Pharos environment? For vendors that manufacture the hardware used in NFV environments, it is a way to encourage compatibility testing of their products.

Integrating VES to OPNFV

VES project
- VNF Event Stream Project
- Demo
  - vHello VES Demo in OpenStack Barcelona 2016
  - VES ONAP demo from OPNFV Summit 2017
- From the VNF Event Stream (VES) Project Proposal, Alok Gupta, 13 Jun 2016, OPNFV VES.pptx
- VNF Event Stream Proposal
- OPNFV projects that potentially benefit from the VES project
  - Fault Management ([Doctor](https://wiki.opnfv.org/display/doctor)) ***
  - Virtualized Infrastructure Deployment Policies (Copper)
  - High Availability for OPNFV (Availability)
  - Data Collection for Failure Prediction (Prediction) ***
  - Audit (Inspector)
  - Fault localization (RCA, Pinpoint) ***
  - Service Function Chaining (SFC)
  - Moon Security Management
- OpenStack projects that potentially benefit from the VES project
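For context on what a VES event stream carries: each event is a JSON document that the VNF (or an agent on its behalf) pushes over HTTP(S) to a VES collector. The snippet below is only a rough illustration of that flow; the collector URL, port, credentials, and the exact header fields are assumptions from memory and should be checked against the current VES Common Event Format spec.

```python
# Rough illustration of pushing a single heartbeat-style event to a VES collector.
# The endpoint, credentials, and field names here are assumptions for the sketch,
# not an authoritative rendering of the VES Common Event Format.
import time
import requests

now_us = int(time.time() * 1e6)
event = {
    "event": {
        "commonEventHeader": {
            "domain": "heartbeat",
            "eventName": "Heartbeat_vHello",   # hypothetical event name
            "eventId": "heartbeat-0001",
            "priority": "Normal",
            "reportingEntityName": "vHello",
            "sourceName": "vHello-vm-1",
            "sequence": 1,
            "startEpochMicrosec": now_us,
            "lastEpochMicrosec": now_us,
            "version": 3.0,
        }
    }
}

# Hypothetical collector address and listener path
resp = requests.post(
    "http://ves-collector.example.com:8080/eventListener/v5",
    json=event,
    auth=("user", "password"),
)
resp.raise_for_status()
```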

OPNFV Barometer and VES

Summary
VES can be supported with the help of a Kafka broker together with collectd in the OPNFV Barometer project, which aims to collect telemetry from the NFVI.
Barometer Project
The purpose of this project is to provide metrics that can be used to judge the quality of the NFVI. For this, the following are reported:
- NIC statistics
- Resources such as CPU, memory, load, cache, thermals, fan speeds, voltages, and machine check exceptions
This means the output of this project can be used on the host itself as well as inside a VM.
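As a rough sketch of that pipeline (not Barometer's actual collectd plugin), publishing host metrics to a Kafka topic that a VES adapter could then consume might look like the following; the kafka-python and psutil packages, the broker address, and the topic name are all assumptions made for illustration.

```python
# Minimal sketch: push host metrics to a Kafka topic for a downstream VES adapter.
# Assumes `pip install kafka-python psutil` and a broker at localhost:9092.
import json
import time

import psutil
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

while True:
    metric = {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=None),
        "mem_percent": psutil.virtual_memory().percent,
    }
    # A VES collector (or an adapter in front of it) would consume this topic
    # and translate each record into a VES measurement event.
    producer.send("nfvi-metrics", metric)   # hypothetical topic name
    time.sleep(10)
```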

(Summary) AT&T Container Strategy

From https://www.youtube.com/watch?v=rYRiH3HZFN4&t=3s, presented at OpenStack Summit 2017 Boston.
- Containers will be used for workload processing after 2019
- VNFs differ from enterprise IT workloads (4:19)
  - A VNF is not a simple VM
  - Maintains state
  - Complex network configuration
  - Sophisticated storage connectivity
  - HA is important
- In 2018-2019, vendors and open source projects, especially OpenStack, should do something to meet these requirements
- Even if it takes some time for containers to replace VMs from a workload perspective, running the OpenStack services themselves as containers is possible today

(News) Amdocs Brings an NFV Software Package Based on ONAP

September 12, 2017
https://www.sdxcentral.com/articles/news/amdocs-brings-nfv-software-package-based-onap/2017/09/
Amdocs announced its new NFV Powered by ONAP portfolio, a portfolio featuring modular capabilities that accelerate service design, virtualization, and operating capabilities on demand. Service providers using technologies developed in ONAP and its ecosystem of capabilities can provide enterprises the ability to design their own networks as part of a richer set of service features.
From: https://www.amdocs.com/media-room/amdocs-nfv-powered-onap-worlds-first-software-and-services-portfolio-carriers-based-open
Amdocs? Amdocs was initially involved with AT&T's home-grown ECOMP platform as an integrator.

Prometheus

A container monitoring tool under the Cloud Native Computing Foundation (http://cncf.io). I installed it to see whether it could be used to manage the three containers running on my Mac mini at home (honestly there is nothing to manage; I just wanted to look at the container monitoring features for fun).
Install
Like many cncf.io tools it is written in Go, so I installed golang first: brew install go. On OSX, brew amazes me every time I use it. Ubuntu has apt, of course, but brew feels much more convenient than apt. After that I simply pulled and ran the Prometheus image from Docker Hub.

NGINX for service mesh

NGINX Releases Microservices Platform, OpenShift Ingress Controller, and Service Mesh Preview
Is NGINX also jumping on the service mesh bandwagon?
NGINX Application Platform
- NGINX Plus, the commercial variant of the popular open source NGINX web server
- NGINX Web Application Firewall (WAF)
- NGINX Unit, a new open source application server that can run PHP, Python and Go
- NGINX Controller, a centralised control plane for monitoring and management of NGINX Plus
Additional releases
- a Kubernetes Ingress Controller solution for load balancing on the Red Hat OpenShift Container Platform
- an implementation of NGINX as a service proxy for the Istio service mesh control plane

Wordpress with docker-compose

Under construction!!
Error
I created a docker-compose.yaml file following the example at https://docs.docker.com/compose/wordpress/#define-the-project and gave it a try, but it failed:

    cychong:~/work/my_wordpress cychong$ docker-compose up -d
    Pulling db (mysql:5.7)...
    Traceback (most recent call last):
      File "docker-compose", line 3, in <module>
      File "compose/cli/main.py", line 68, in main
      File "compose/cli/main.py", line 118, in perform_command
      File "compose/cli/main.py", line 928, in up
      File "compose/project.py", line 427, in up
      File "compose/service.py", line 311, in ensure_image_exists
      File "compose/service.py", line 1016, in pull
      File "site-packages/docker/api/image.