LiveConfig: Default page for new web accounts

If you want to set up a default page for new accounts, showing for example "under construction" or "A new website is being built here", you can do this with the following howto.

Step 1:
First, create the desired page. It can be written in any language the web server supports. The page is placed in the webroot above the web folders, which in most cases is "/var/www/". The file name does not matter.
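
A minimal sketch of such a page, created directly from the shell (the path /var/www/default.html is just an example; it is reused in step 2):

cat > /var/www/default.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>Under construction</title></head>
  <body><h1>Under construction</h1></body>
</html>
EOF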

Step 2:
Now the custom.lua file has to be adjusted. It is located on the LiveConfig client under /usr/lib/liveconfig/lua/.
Add the following lines to this file:

-- keep a reference to the original addAccount function
MY = { }
MY.addAccount = LC.web.addAccount

-- wrap addAccount: create the account, then copy the default page into it
function LC.web.addAccount(name, quota, shell, password)
  MY.addAccount(name, quota, shell, password)
  local home = LC.web.getWebRoot() .. "/" .. name
  LC.fs.copy("/var/www/default.html", home .. "/htdocs/index.html")
  LC.fs.setperm(home .. "/htdocs/index.html", 750, name, name)
  return true, home
end

The path "/var/www/default.html" has to be adjusted to the file created in step 1.
The file name "index.html" also has to be adjusted if it is not an HTML file.

Step 3:
Finally, restart the LiveConfig client so that the new custom.lua file is loaded.

service lcclient restart

Done.

If you run LiveConfig with multiple servers, this howto has to be repeated on all client servers.

Use AWS Route53 as DNS slave or sync local bind zones to AWS Route53

I had two problems which I could solve with one piece of software:
Case 1: I want to use Route 53 as backup/slave for my local BIND servers.
Case 2: I want to use my local zone for AWS without adding an additional name server to the EC2 machines.
For those cases I found cli53.
Project URL: https://github.com/barnybug/cli53
The installation is really easy:

wget https://github.com/barnybug/cli53/releases/download/0.7.4/cli53-linux-amd64 -O /usr/local/bin/cli53
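
The file is downloaded as a plain binary, so don't forget to make it executable:

chmod +x /usr/local/bin/cli53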

After this you need an account at AWS with the following permissions:

  • route53:ListHostedZones
  • route53:ChangeResourceRecordSets
  • route53:ListResourceRecordSets
  • route53:GetChange
You are free to limit it to single zones or to all zones.
In my case I created an extra account for this and gave it permission for only one zone with an inline policy via IAM:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "arn:aws:route53:::hostedzone/##ZONEID##"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:GetChange"
            ],
            "Resource": "arn:aws:route53:::change/*"
        }
    ]
}
If you want to use this policy, you need to replace ##ZONEID## with the ID of the zone.
The next step is to create the credentials for the user. This is easily done in IAM.
The credentials for the tool are stored in /root/.aws/credentials.
You can define different profiles there.
[##profilename##]
aws_access_key_id = ##accesskey##
aws_secret_access_key = ##secret##
You need to replace everything marked with ## with your profile name (it does not need to match the AWS account name) and your credentials.
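
Before syncing anything, you can check that the credentials work, for example by listing your hosted zones (assuming your cli53 version provides the list subcommand and the --profile flag used below):

cli53 list --profile ##profilename##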
And finally you can sync a domain to Route 53 with this command:
Warning: check the meaning of the command and the parameters with "cli53 help" before you execute anything. You should know what you are doing!
cli53 import --profile ##profilename## --file /etc/bind/db.##zone## --replace ##zone##
Here you need to replace ##profilename## with the profile name from the credentials file and ##zone## with the name of your zone.
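
For the backup/slave use case (case 1) you will probably want to run the import regularly. A minimal sketch for an /etc/crontab entry, reusing the placeholders from above (interval and paths are just examples):

# sync the zone to Route 53 once per hour
0 * * * * root /usr/local/bin/cli53 import --profile ##profilename## --file /etc/bind/db.##zone## --replace ##zone##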

Use your own certificate with AWS Cloudfront

Since it is not easy to find in the AWS docs, I'm posting it here so it is easier to find.

You need the aws command-line tool for this. It must be configured with an account with full access to CloudFront.

After this you can add the cert with this command:

aws iam upload-server-certificate --server-certificate-name ShowNameForTheCert --certificate-body file://publickey.crt --private-key file://privatekey.key --certificate-chain file://chain.ca-bundle --path=/cloudfront/

After you have run this command successfully, you can select the cert in the CloudFront configuration.
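
If you want to double-check that the upload worked, you can list the server certificates stored in IAM:

aws iam list-server-certificates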

Elasticsearch recovery: try to recover [your-index][X] from primary shard with sync id but number of docs differ

If you get the following error, for example after disk space problems:

[2016-05-13 12:09:41,770][WARN ][indices.cluster          ] [stage-elasticsearch1] [[logstash-2016.05.12.21][1]] marking and sending shard failed due to [failed recovery]
RecoveryFailedException[[logstash-2016.05.12.21][1]: Recovery failed from {stage-elasticsearch2}{5vAIif9rSRi4xthlji1pGQ}{192.168.216.12}{192.168.216.12:9300}{max_local_storage_nodes=1, master=true} into {stage-elasticsearch1}{I6DMfjXiQP61532HqJ77YA}{192.168.217.12}{192.168.217.12:9300}{max_local_storage_nodes=1, master=false}]; nested: RemoteTransportException[[stage-elasticsearch2][192.168.216.12:9300][internal:index/shard/recovery/start_recovery]]; nested: RecoveryEngineException[Phase[1] phase1 failed]; nested: RecoverFilesRecoveryException[Failed to transfer [0] files with total size of [0b]]; nested: IllegalStateException[try to recover [logstash-2016.05.12.21][1] from primary shard with sync id but number of docs differ: 668 (stage-elasticsearch2, primary) vs 669(stage-elasticsearch1)];

You can easily fix this with the following command:

curl -XPOST http://localhost:9200/logstash-2016.05.12.21/_flush?force
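
Afterwards you can watch the shards of the affected index come back, for example via the _cat API (index name as in the log above; output details depend on your Elasticsearch version):

curl 'http://localhost:9200/_cat/shards/logstash-2016.05.12.21?v'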

Setup MySQL replication with Amazon AWS RDS Master and external slave

First, you need to create an RDS instance (if one does not exist yet).

Then you should check the current value of the RDS binlog settings:

admin@awsgate [(none)]> call mysql.rds_show_configuration\G
*************************** 1. row ***************************
name: binlog retention hours
value: NULL
description: binlog retention hours…
1 row in set (0.00 sec)

If the value is NULL, the binlogs will be deleted as soon as RDS no longer needs them for its internal slaves. Set it to the number of hours you want to keep the binlogs. Please don't forget that every MB of disk space on Amazon costs money, so be aware of what you are setting here. For this howto, I'm choosing 24 hours:

admin@awsgate [(none)]> call mysql.rds_set_configuration('binlog retention hours', 24);

Additionally we need a user for the replication:

admin@awsgate [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%' IDENTIFIED BY 'Ultr4M3g4H!ghS3cur!ty';

The next step is to create a backup of your RDS instance. I'm using mydumper because it is much faster and easier to handle than mysqldump. You should know that mydumper will lock all tables for the backup, so I recommend setting up a slave (read replica) on RDS for creating the backup.

me@awsgate:~/dump/# /usr/bin/mydumper -u youruser -p 'yourpassword' -h hostname.ident.region.rds.amazonaws.com -o ./ -e -c -k -l 120 --lock-all-tables

--lock-all-tables is required for getting the RDS master data.
--use-savepoints does not work with RDS, because it needs SUPER privileges, which are not available on RDS instances.

In my case I needed to copy the backup to the new slave machine. You can also do the export directly on the slave machine. Don't forget to open the MySQL port in the AWS security group of the RDS instance.
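
For the copy itself, a plain scp of the dump directory is enough (host name and path are just examples matching the prompts in this howto):

# copy the mydumper output from the export machine to the slave
scp -r ~/dump me@slave:~/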

I recommend not importing the mysql database, since it can result in errors:

me@slave:~/dump# rm -f mysql*

Now you can import the data:

me@slave:~/dump# myloader -d ./ -t 32 -u youruser -p yourpass

Now you need to find out the master log file and position:

me@slave:~/dump# cat metadata
Started dump at: 2015-11-24 14:26:56
SHOW SLAVE STATUS:
Host: 10.10.1.196
Log: mysql-bin-changelog.025550
Pos: 9170
Finished dump at: 2015-11-24 14:27:12

Now you can set up the master data on the slave and activate the slave process. You need to replace the values of MASTER_LOG_FILE and MASTER_LOG_POS with the values from the metadata file.

me@slave [(none)]> CHANGE MASTER TO MASTER_HOST='hostname.ident.region.rds.amazonaws.com', MASTER_USER='slave', MASTER_PASSWORD='Ultr4M3g4H!ghS3cur!ty', MASTER_LOG_FILE='mysql-bin-changelog.025550', MASTER_LOG_POS=9170;
me@slave [(none)]> START SLAVE;

The replication should be running now. Check it out:

me@slave [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: hostname.ident.region.rds.amazonaws.com
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin-changelog.025557
Read_Master_Log_Pos: 9170
Relay_Log_File: mysqld-relay-bin.000016
Relay_Log_Pos: 9343
Relay_Master_Log_File: mysql-bin-changelog.025557
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Exec_Master_Log_Pos: 9170
Relay_Log_Space: 9574

Master_Server_Id: 950487267
Master_UUID: 43d7b440-4d6b-11e5-865d-06fdf6329d29
Master_Info_File: /var/lib/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it

1 row in set (0.00 sec)

Puppet & Passenger UTF-8 Problems

After an update of Puppet & Passenger I got this error:

Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: invalid byte sequence in US-ASCII
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed when searching for node xxxx: invalid byte sequence in US-ASCII
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping ru

You can fix it by adding this line on the Puppet master and restarting Passenger (i.e. restarting apache2 or nginx):

echo "Encoding.default_external = Encoding::UTF_8" >> /usr/share/puppet/rack/puppetmasterd/config.ru
service apache2 restart

After this, Puppet should run in UTF-8 mode and the problem should be solved.
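
To verify the fix, you can trigger a manual run on one of the agents; the error from above should be gone:

puppet agent --test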

mozjpeg 3.0.0 on Debian and Ubuntu

There is no prebuilt Debian/Ubuntu package for mozjpeg 3.0.0.

Here is a short howto for building it yourself:

Install build requirements:

sudo apt-get install autoconf automake libtool nasm make pkg-config git

Go to your build directory and download source code from git:

git clone https://github.com/mozilla/mozjpeg.git

Prepare & build mozjpeg (inside the cloned mozjpeg directory):

cd mozjpeg
autoreconf -fiv
./configure --prefix=/usr
make

The default prefix is "/opt/mozjpeg"; I recommend changing this to "/usr".

You can either install it directly or build a .deb package:

Direct install: make install
Deb package: make deb

The "deb package" method creates a Debian package with the file name:

mozjpeg_3.0_amd64.deb
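
You can then install the package on the target machine with dpkg:

sudo dpkg -i mozjpeg_3.0_amd64.deb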

You need to build it separately for each release of every distribution, because the binaries depend on the distribution's glibc version.

Troubleshooting:

Problem: autoreconf exits with the following error:

configure.ac:22: error: possibly undefined macro: AC_PROG_LIBTOOL
If this token and others are legitimate, please use m4_pattern_allow.
See the Autoconf documentation.

Solution: libtool is not installed. Install it with "apt-get install libtool".

Problem: ./configure exits with the following error:

./configure: line 13146: syntax error near unexpected token `libpng,'
./configure: line 13146: `PKG_CHECK_MODULES(libpng, libpng, HAVE_LIBPNG=1,'

Solution: pkg-config was not installed when autoreconf was run. Install pkg-config ("apt-get install pkg-config") and run "autoreconf -fiv" again.