Docker network conflicting with the local network's subnet

Background:

The company's crawler server deployed in HK suddenly appeared to be down. It later turned out it was only unreachable from the Shenzhen office. After digging in, the cause was that Docker's networks (either a docker network's subnet or some container's IP) fell into the same subnet as the host's internal LAN address, causing a conflict.

Troubleshooting:

There are two places to check. The first is the default network that the Docker daemon creates at startup.

The default network uses bridge mode and is the default way for containers to communicate with the host.

To change the default subnet, see http://blog.51cto.com/wsxxsl/2060761

Besides that, also pay attention to the subnets of the networks created with docker network.

Use docker network ls to list the current networks,

then use docker inspect to see the details of each network.

You can also just run ip addr and check whether any of the assorted virtual interfaces has an IP whose first two octets match the host's IP.
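A quick way to run those checks (the network and interface names will differ per host):

$ docker network ls
$ docker network inspect bridge | grep -i subnet
$ ip addr | grep 'inet '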

Solution:

My first idea was to specify the default network's subnet at docker-compose up time.

It turned out that this doesn't seem to be supported any more? See "version 1.10.0 error on gateway spec":

Was there any discussion on that? I do need to customize the network, because my company uses the 172.16.0.0/16 address range at some segments and Docker will simply clash with that by default, so every single Docker server in the whole company needs a forced network setting.

Now while upgrading my dev environment to Docker 1.13 it took me hours to stumble into this Github issue, because the removal of those options was completely undocumented.

So please, if I am working on a network which requires a custom docker subnet, how am I supposed to use Docker Compose and Docker Swarm?

In the end I settled on a somewhat indirect workaround:

manually create a docker network with a safe subnet first, and then reference it in the docker-compose configuration file.
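A minimal sketch of that workaround; the network name crawler_net, the subnet 192.168.100.0/24 and the image are placeholders to adapt to whatever does not collide with your LAN:

$ docker network create --subnet=192.168.100.0/24 crawler_net

# docker-compose.yml (compose file format 3.x), reusing the pre-created network
version: "3"
services:
  crawler:
    image: my-crawler:latest
networks:
  default:
    external:
      name: crawler_net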

 

 

 

 

 

 

Notes from a pitfall with volumes in docker compose

Symptom:

Mounting a named volume with docker compose has no effect (and produces no error message).

Troubleshooting:

At first I wasn't using the docker-compose command at all: running docker run -v directly to mount two absolute paths worked fine.

Then I switched to a named volume, using the local-persist plugin to pin the volume's location on the host. Still with a plain docker run -v, that also worked fine.

Next I moved the setup into docker compose, and found the named volume was no longer mounted.

Mounting two absolute paths from docker compose, however, was fine,

so I suspected the volume itself.

At this point I ran docker inspect on the container started by docker compose with the named volume,

and found that in the Mounts section the named volume was not the name I had written in docker-compose.yml: it carried an extra prefix, which happened to be the name of the directory containing docker-compose.yml.

A quick search showed I'm definitely not the only one bitten by this (orz): Docker-compose prepends directory name to named volumes

In hindsight I should have gone straight to docker inspect... it would have found the problem much faster.
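For reference, a quick way to see what actually got mounted (the container name is a placeholder):

$ docker inspect -f '{{ json .Mounts }}' my_container | python -m json.tool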

Solution:

There are a few ways to fix it:

  • Don't create the volume manually; instead set the volume's mountpoint in docker-compose.yml (through the local-persist driver options).
  • Add external: true to the volume entry in docker-compose.yml, so compose uses the volume name as-is; see external.

For reference, my docker-compose.yml looked roughly like the sketch below.
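A minimal sketch of the external-volume approach; the service name, image, volume name and container path are placeholders, and the volume appdata is assumed to have been created beforehand with the local-persist driver (docker volume create -d local-persist -o mountpoint=/data/app --name appdata):

version: "2"
services:
  app:
    image: nginx:alpine
    volumes:
      - appdata:/usr/share/nginx/html
volumes:
  appdata:
    external: true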

 

 

 

 

How to use Scrapy with a Django Application (reposted from Medium)

I came across a really good article on Medium... but frustratingly the key parts would not load, even through a proxy. Just now it suddenly loaded, so I'm copying it over here in case it becomes unreachable again.

There are a couple of articles on how to integrate Scrapy into a Django application (or vice versa?). But most of them don't cover a full, complete example that includes triggering spiders from Django views. Since this is a web application, that must be our main goal.

What do we need?

Before we start, it is better to specify what we want and how we want it. Check this diagram:

It shows how our app should work:

  • The client sends a request with a URL to crawl. (1)
  • Django triggers Scrapy to run a spider to crawl that URL. (2)
  • Django returns a response telling the client that crawling has just started. (3)
  • Scrapy completes the crawl and saves the extracted data into the database. (4)
  • Django fetches that data from the database and returns it to the client. (5)

Looks great and simple so far.

A note on that 5th statement

"Django fetches that data from the database and returns it to the client. (5)"

Neither Django nor the client knows when Scrapy completes the crawl. There is a pipeline hook for that moment, close_spider, but it belongs to the Scrapy project; we can't return a response from Scrapy pipelines. We use that method only to save the extracted data into the database.

 

Well, eventually, somewhere, we have to tell the client:

Hey! Crawling is complete and I am sending you the crawled data here.

There are two possible ways to do this (please comment if you discover more):

We can either use web sockets to inform the client when crawling is complete.

Or,

we can have the client send a request every 2 seconds (more? less?) to check the crawling status, after it gets the "crawling started" response.

The web socket solution sounds more stable and robust, but it requires a second service running separately, which means more configuration. I will skip that option for now, but I would choose web sockets for my production-level applications.

Let’s write some code

It’s time to do some real job. Let’s start by preparing our environment.

Installing Dependencies

Create a virtual environment and activate it:
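Something along these lines should do (assuming Python 3.5+ and pip; package versions are left unpinned):

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install django scrapy scrapyd python-scrapyd-api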

Scrapyd is a daemon service for running Scrapy spiders. You can discover its details here.

python-scrapyd-api is a wrapper that lets us talk to Scrapyd from our Python program.

Note: I am going to use Python 3.5 for this project

Creating Django Project

Create a Django project with an app named main:
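Presumably something like this; the project name django_scrapy is a placeholder:

$ django-admin startproject django_scrapy
$ cd django_scrapy
$ python manage.py startapp main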

We also need a model to save our scraped data. Let’s keep it simple:
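A sketch of main/models.py; the exact field names are assumptions:

# main/models.py
from django.db import models


class ScrapyItem(models.Model):
    unique_id = models.CharField(max_length=100)    # chosen by the view before crawling starts
    data = models.TextField()                       # crawled URLs, stored as a JSON string
    date = models.DateTimeField(auto_now_add=True)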

Add the main app to INSTALLED_APPS in settings.py. And as a final step, run the migrations:
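The usual commands:

$ python manage.py makemigrations
$ python manage.py migrate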

Let’s add a view and url to our  main app:

I tried to document the code as much as I can.

But the main trick is unique_id. Normally we save an object to the database and then get its ID; in our case we specify its unique_id before creating the object. Once crawling is complete and the client asks for the crawled data, we can build a query with that unique_id and fetch the results.

And a URL for this view:
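A sketch using Django 2's path(); on older Django versions use url() instead:

# main/urls.py
from django.urls import path

from main import views

urlpatterns = [
    path('api/crawl/', views.crawl, name='crawl'),
]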

Creating Scrapy Project

It is better if we create the Scrapy project under (or next to) our Django project; this makes it easier to connect them. So let's create it under the Django project folder:
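For example:

$ scrapy startproject scrapy_app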

Now we need to create our first spider, from inside the scrapy_app folder:

I named the spider icrawler; you can name it anything. Note the -t crawl part: we are specifying a base template for our spider. You can see all available templates with:
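Something like (example.com is a placeholder domain):

$ scrapy genspider -t crawl icrawler example.com
$ scrapy genspider -l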

Now we should have a folder structure like this:

Connecting Scrapy to Django

In order to access Django models from Scrapy, we need to connect the two projects. Go to the settings.py file under scrapy_app/scrapy_app/ and put:
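A sketch of that glue code; the Django project name django_scrapy and the relative path are assumptions that depend on your layout:

# scrapy_app/scrapy_app/settings.py (top of the file)
import os
import sys

import django

# Make the Django project importable from inside Scrapy, then boot Django.
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), '..', '..'))
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_scrapy.settings'
django.setup()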

That’s it. Now let’s start  scrapyd to make sure everything installed and configured properly. Inside  scrapy_app/ folder run:

$ scrapyd

This will start scrapyd and generate some output. Scrapyd also has a very minimal and simple web console. We don't need it in production, but we can use it to watch active jobs while developing. Once you start scrapyd, go to http://127.0.0.1:6800 and check that it is working.

Configuring Our Scrapy Project

Since this post is not about the fundamentals of Scrapy, I will skip the part about modifying spiders; you can create your spider with the help of the official documentation. I will put my example spider here, though:
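A sketch along those lines; the link-extraction rule and the empty allowed_domains are assumptions:

# scrapy_app/scrapy_app/spiders/icrawler.py
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class IcrawlerSpider(CrawlSpider):
    name = 'icrawler'
    # Follow every link found on the crawled pages and hand it to parse_item.
    rules = [Rule(LinkExtractor(unique=True), callback='parse_item')]

    def __init__(self, *args, **kwargs):
        # url and unique_id are the dynamic parts: they arrive from the Django
        # view through scrapyd.schedule(..., url=..., unique_id=...).
        self.url = kwargs.get('url')
        self.unique_id = kwargs.get('unique_id')
        self.start_urls = [self.url]
        self.allowed_domains = []  # no domain restriction in this sketch
        super().__init__(*args, **kwargs)

    def parse_item(self, response):
        # One item per visited page; the pipeline collects and stores them.
        yield {'url': response.url, 'unique_id': self.unique_id}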

Above is the icrawler.py file from scrapy_app/scrapy_app/spiders. Pay attention to the __init__ method: it is important. If we want to make a method or property dynamic, we need to define it inside __init__, so that we can pass arguments from Django and use them here.

We also need to create an Item Pipeline for our Scrapy project. A pipeline is a class that performs actions over scraped items. From the documentation:

Typical uses of item pipelines are:

  • cleansing HTML data
  • validating scraped data (checking that the items contain certain fields)
  • checking for duplicates (and dropping them)
  • storing the scraped item in a database

Yay! Storing the scraped item in a database. Now let's create one. Actually, there is already a file named pipelines.py inside the scrapy_app folder, and it contains an empty but ready pipeline. We just need to modify it a little bit:
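A sketch of that pipeline: it simply collects every crawled URL and writes one ScrapyItem row through the Django ORM when the spider closes.

# scrapy_app/scrapy_app/pipelines.py
import json

from main.models import ScrapyItem


class ScrapyAppPipeline(object):
    def __init__(self):
        self.items = []  # URLs collected during this crawl

    def process_item(self, item, spider):
        self.items.append(item['url'])
        return item

    def close_spider(self, spider):
        # Called once the spider finishes: persist everything at once.
        ScrapyItem.objects.create(
            unique_id=spider.unique_id,
            data=json.dumps(self.items),
        )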

And as a final step, we need to enable (uncomment) this pipeline in the Scrapy settings.py file:
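That is, something like this (assuming the pipeline class from the sketch above):

ITEM_PIPELINES = {
    'scrapy_app.pipelines.ScrapyAppPipeline': 300,
}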

Don’t forget to restart  scraypd if it is working.

This Scrapy project basically:

  • crawls a website (whose URL comes from the Django view)
  • extracts all URLs from that website
  • puts them into a list
  • saves the list to the database through Django models.

And that’s all for the back-end part. Django and Scrapy are both integrated and should be working fine.

Notes on the Front-End Part

Well, this part is quite subjective; we have tons of options. Personally I built my front-end with React. The only part that is not subjective is the use of setInterval. Yes, let's remember our options: web sockets, or sending requests to the server every X seconds.

To clarify the base logic, this is a simplified version of my React component:


 

 

You can discover the details from the comments I added. It is quite simple, actually.

Oh, that's it. It took longer than I expected. Please leave a comment with any kind of feedback.

Lua learning notes

Lua is a lightweight scripting language... apparently well suited to writing games? I've seen plenty of Lua scripts in 太阳神三国杀. Since Splash rendering scripts have to be written in Lua, time to pick it up.

Straight to the syntax... I can see shades of both Python and Pascal (orz).

Golang learning notes

Resources first; they lean somewhat towards Go's system-call side.

I won't record detailed Go syntax here, only the learning process, the pitfalls, and whatever else I think is worth noting.

Go's switch statement finally works the way humans think: once a case matches, there is no need for a break.

The defer keyword delays a statement until the enclosing function returns; deferred statements are pushed onto a stack and then executed in FILO order... kind of interesting?

Parameter lists: if several parameters share the same type, writing the type keyword once is enough.

:= is not Pascal's assignment operator (what a letdown...); it is the short variable declaration syntax, and it cannot be used outside a function.

Overall, Go feels like a bit of C++ plus a lot of Python.

 

 

 

30分钟上手GO语言–基础语法 (Get started with Go in 30 minutes – basic syntax)

A Go Programmer's Guide to Syscalls

视频笔记:Go 和 syscall – Liz Rice (video notes: Go and syscall – Liz Rice)

Web crawler learning notes

Once again, driven by the need to make a living...

 

I followed 面向新人的 Python 爬虫学习资料 (a Python crawler learning guide for newcomers).

The rough learning path is:

1. Simple targeted script crawlers (requests — bs4 — re)

2. Large framework-based crawlers (mainly the Scrapy framework)

3. Browser-simulation crawlers (Mechanize and Selenium)

With a Python background and a bit of HTML... getting started is basically zero difficulty.

A young person's first crawler (even though the code was copied straight from the tutorial...).
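A minimal sketch of what such a first crawler looks like (not the original copied code; the URL is a placeholder): fetch a page and print every link on it.

import requests
from bs4 import BeautifulSoup

resp = requests.get('https://example.com')
resp.raise_for_status()
soup = BeautifulSoup(resp.text, 'html.parser')

for a in soup.find_all('a', href=True):
    print(a['href'], a.get_text(strip=True))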

 

A young person's second crawler: https://github.com/111qqz/spider-demo, which crawled a week of weather data for my hometown.

I think a crawler being able to work comes down to two factors:

one, a website's page source ultimately ends up stored locally on our machine;

the other, the page markup follows patterns...

So the difficulty of a basic crawler lies purely in spotting those patterns... then, combined with Chrome DevTools (simulated clicks, element inspection) and a text-parsing tool like XPath... it's done.

Dealing with anti-crawling measures, and making crawlers fast, is probably where the real core skill of a "crawler engineer" lies.

References:

BeautifulSoup official documentation — a Python library that structures HTML data

Scrapy official documentation — a crawling framework

 

java-grpc pitfall notes

A recent project needs inter-process communication between Java and Python, which brought to mind gRPC, which I had used before.

Following the official quickstart:

  • JDK: version 7 or higher

It looks like it only depends on the JDK. Great.

Then, following the docs, run

./gradlew installDist

which fails with an error:

It looks like a gcc or clang problem... Let's install clang first; maybe clang is so common that the docs didn't bother mentioning it. Surely it will work now.

Yet it fails again:

??????

This time it looks like the stdc++ library cannot be found.

On CentOS I installed: libstdc++.x86_64 4.8.5-28.el7_5.1 @updates
libstdc++-devel.x86_64 4.8.5-28.el7_5.1 @updates
libstdc++-static.x86_64

Surely no problem this time?

It keeps failing, though, this time because javadoc cannot be found.

Unbelievable... the gRPC docs are as bad as ever. I'm still stuck in this hole, so let me at least write down the pits so far.

Then it fails yet again:

Something seems quite wrong here...

The docs are certainly bad, but maybe I'm simply holding it wrong?

So I went over to the engineering institute to ask a colleague, and indeed I was holding it wrong:

the correct approach is simply to add the dependency in a dependency manager such as Maven; there is no need to build and install it by hand.

Let's give that a try first, following this reference,

and meanwhile go through maven-in-five-minutes to get a feel for how Maven is used.

Although hardly a comprehensive list, these are the most common default lifecycle phases executed.

  • validate: validate the project is correct and all necessary information is available
  • compile: compile the source code of the project
  • test: test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
  • package: take the compiled code and package it in its distributable format, such as a JAR.
  • integration-test: process and deploy the package if necessary into an environment where integration tests can be run
  • verify: run any checks to verify the package is valid and meets quality criteria
  • install: install the package into the local repository, for use as a dependency in other projects locally
  • deploy: done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.

There are two other Maven lifecycles of note beyond the default list above. They are

  • clean: cleans up artifacts created by prior builds
  • site: generates site documentation for this project

Running mvn package produced several hundred lines of errors,

so I first ran mvn validate to check the basic state of things.
The topmost error was:

Following "An exception occurred when I use maven plugin, Why?", it turned out the Maven version was the problem; after downloading the current release, 3.5.4, from the official site, it was fine.

The next step is mvn compile, which is also fine.

Then mvn test: several hundred lines of errors again. The key message is java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument

Following "java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument",

I added two dependencies (once again, the official grpc-java docs & demo are unreliable),

and that seems to be fine now.

Then mvn package fails again, though thankfully with only a few dozen lines of output...

The gist: Unable to find a single main class from the following candidates

So it cannot figure out where the program entry point is, which seems fair, given that every file in the grpc-java examples carries its own main function.

Following "SpringBoot: Unable to find a single main class from the following candidates", adding the appropriate property to pom.xml (typically a start-class property naming the entry class) fixes it.

 

Then, on the other side, start the Python server.

Make sure the IP and port number match on both sides. It works.

For the final code, see grpc-java-maven-exmaple

Spring learning notes

Driven by the need to make a living, I have to learn Spring from scratch.

Before this post, my entire Java background was one course project written in 2015, and I knew nothing at all about Spring.

To learn Spring, I did the following things in order:

[spring] Dependency injection

A term that sounds far fancier than it is... it's actually a really simple idea (orz).

In plain words: if class A uses an instance of class B, then that B instance is a dependency of class A. If, instead of creating the B instance inside class A, you pass a B instance into A through some interface, that is dependency injection.

 

Dependency injection (DI) and inversion of control (IoC) are basically the same idea, since in practice neither is discussed without the other.

Simply put: a depends on b, but a does not control b's creation or destruction; it only uses b. Control over b is handed to something outside a, and that is inversion of control (IoC). Since a depends on b, it has to use an instance of b, so b gets passed in:

  1. through one of a's (setter) interfaces;
  2. through a's constructor;
  3. by setting a property of a.

That process is dependency injection (DI).
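A tiny sketch of the idea in Python (the class names are made up): instead of a building its own b, the b instance is handed in from outside.

class Engine:                       # "b": the dependency
    def start(self):
        print("engine started")


class Car:                          # "a": depends on Engine, but does not create it
    def __init__(self, engine):
        self.engine = engine        # constructor injection

    def drive(self):
        self.engine.start()


# Whatever sits out here and decides which Engine to hand in is playing
# the role of the IoC container.
car = Car(Engine())
car.drive()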

So what is an IoC container?

As DI gets used more and more, implementing IoC means a lot of repetitive code, and as the techniques evolve there are ever more ways to implement it, so people packaged those IoC implementations into components or frameworks to stop everyone from reinventing the wheel.

A component or framework that implements IoC is what we call an IoC container.

Author: phoenix
Link: https://www.zhihu.com/question/32108444/answer/220819349
Source: Zhihu
The copyright belongs to the author. For commercial reuse please contact the author for authorization; for non-commercial reuse please credit the source.

References:

Dependency Injection Demystified

What is dependency injection?

 

 

learn java in 21 minutes for C++ Programmers

Resources first:

Learning a New Programming Language: Java for C++ Programmers

 

java package

First, a few important points in plain language:

  • The first line of a .java file may declare the package the file belongs to; the package name must match the directory path under the working directory.
  • Classes within the same package can access each other by default.
  • A class whose access is public can be accessed by classes in other packages, provided the file it lives in declares a package.
  • The .java file name must match the name of the class declared public in that file (if there is no public class, the name can be anything).
  • Every class is part of some package.
  • All classes in a file are part of the same package.
  • You can specify the package using a package declaration: package name; as the first (non-comment) line in the file.
  • Multiple files can specify the same package name.
  • If no package is specified, the classes in the file go into a special unnamed package (the same unnamed package for all files).
  • If a package name is specified, the file must be in a subdirectory called name (i.e., the directory name must match the package name).
  • You can access public classes in another (named) package using package-name.class-name. You can access the public fields and methods of such classes using package-name.class-name.field-or-method-name. You can avoid having to include the package-name by putting import package-name.*; or import package-name.class-name; at the beginning of the file (after the package declaration). The former imports all of the classes in the package, and the second imports just the named class. You must still use class-name to access the classes in the packages, and class-name.field-or-method-name to access the fields and methods of the class; the only thing you can leave off is the package name.

Below are some examples:

Assume that you are working in a directory called Javadir, and that you create four files, whose contents are shown below.

Here are the directories and file names you must use:

  • File 1 must be in a subdirectory named ListPkg, in a file named List.java.
  • File 2 must also be in the ListPkg subdirectory, in a file named NoNextItemException.java.
  • File 3 must be in a file named Test.java (in the Javadir directory).
  • File 4 can be in any .java file (in the Javadir directory).

And here are the classes that can be accessed by the code in each file:

  • Files 1 and 2:
    • The code in the first two files (ListPkg/List.java and ListPkg/NoNextItemException.java) can access the classes defined in the same package (List, ListNode, and NoNextItemException). (No access was specified for those classes, so they get the default, package access.)
    • The code in files 1 and 2 cannot access class Test, even though it is a public class. The problem is that Test is in an unnamed package, so the code in the ListPkg package has no way to import that package, or to name class Test.
    • The code in files 1 and 2 cannot access classes Utils and Test2, because they have default (package) access, and are in a different package.
  • Files 3 and 4:
    • The code in file 3 (Test.java) can access classes ListPkg.List, ListPkg.NoNextItemException, Test, Utils, and Test2 (the first two because they are public classes in a named package, and the last three because they are in the same, unnamed package, and have either public or package access). Note however, that if the code in Test.java uses the class Test2, and that class is not in a file called Test2.java, then the file that contains class Test2 must be compiled first, or else the class will not be found.
    • The code in file 4 (the file that contains class Test2) can access the same classes as the code in file 3 (Test.java).

Front-end to-do list

Update 2018-10-14: I no longer need to do this. Great.

Driven by the need to make a living, I have to learn front-end development from scratch.

Since the previous to-do list is ancient and the front-end stack has almost nothing to do with what came before, I'm starting a new post to keep track.

  • the CSS box model
  • layout, flex
  • front-end debugging techniques
  • JavaScript; when there is time, practice the syntax by solving LeetCode problems in JS
  • TypeScript
  • JSX, and TSX, its TypeScript counterpart
  • learn axios https://alligator.io/react/axios-react/
  • learn dva: https://github.com/sorrycc/blog/issues/62
  • learn umijs https://umijs.org/zh/guide/with-dva.html#%E7%89%B9%E6%80%A7

TypeScript learning notes

References first:

TypeScript 入门教程 (a Chinese-language introductory TypeScript tutorial)

React & Webpack

react-typescript-cheatsheet (strongly recommended; it covers a lot of React + TS practice)

TypeScript is a syntax extension of JavaScript... the benefit is that it adds types, so static type checking can happen at compile time (the output being plain .js files).

TypeScript's question-mark syntax marks a parameter as optional, for example a parameter declared as name?: string.

On TypeScript's type inference: if a variable is assigned a value right where it is defined, its type is inferred from that value; otherwise it is inferred as any.

 

When TypeScript cannot tell which member of a union type a variable actually is, we can only access the properties and methods shared by every type in the union.

Type assertions:

A type assertion is not a type conversion, and asserting to a type that is not part of the union is not allowed.

Generics:

Generic constraints: the type argument passed in must match the shape of an interface, for example one requiring a property named length of type number.

The point to stress is that the name matters here: change length to any other name and the code will no longer compile.

 

To define an ordinary class property (not props): just declare it before the constructor, for example pointer: number.

JavaScript learning notes

No time to start from scratch for now... I'll just jot things down as I run into them (orz).

Actually, that won't do; I still need a rough overall picture first.

References:

A re-introduction to JavaScript (JS tutorial)

继承与原型链 (Inheritance and the prototype chain)