How to make nice box-shadows

Recently I have been learning how to design better webpage elements. It is still mostly CSS stuff. CSS has been around for ages, and I have worked with it many times, but never in a serious sense. Now I would really like to dive deeper and step up my CSS game.

Basic Syntax

Quoting directly from developer.mozilla.org:

/* Keyword values */
box-shadow: none;

/* offset-x | offset-y | color */
box-shadow: 60px -16px teal;

/* offset-x | offset-y | blur-radius | color */
box-shadow: 10px 5px 5px black;

/* offset-x | offset-y | blur-radius | spread-radius | color */
box-shadow: 2px 2px 2px 1px rgba(0, 0, 0, 0.2);

/* inset | offset-x | offset-y | color */
box-shadow: inset 5em 1em gold;

/* Any number of shadows, separated by commas */
box-shadow: 3px 3px red, -1em 0 0.4em olive;

/* Global keywords */
box-shadow: inherit;
box-shadow: initial;
box-shadow: unset;

/* Rules */
Specify a single box-shadow using:

 - Two, three, or four <length> values.
    - If only two values are given, they are interpreted as <offset-x> and <offset-y> values.
    - If a third value is given, it is interpreted as a <blur-radius>.
    - If a fourth value is given, it is interpreted as a <spread-radius>.
 - Optionally, the inset keyword.
 - Optionally, a <color> value.

To specify multiple shadows, provide a comma-separated list of shadows.

Transformed into examples:

box-shadow: 60px -16px teal;
box-shadow: 10px 5px 5px black;
box-shadow: 2px 2px 2px 1px rgba(0, 0, 0, 0.2);
box-shadow: inset 5em 1em gold;
box-shadow: 3px 3px red, -1em 0 0.4em olive;

Examples of a Nice Shadow

Okay, tbh these examples from Mozilla are pretty random and ugly. Let's make some better ones:

box-shadow: inset 0 0 10px 0 rgba(0, 0, 0, 0.06);
box-shadow: 0 20px 25px -5px rgba(0, 0, 0, 0.1), 0 10px 10px -5px rgba(0, 0, 0, 0.04);

These shadows are much better because they look more realistic: more subtle and blurred, which really lifts the element off the page. These two examples are actually taken from Tailwind CSS (a very good, utility-first CSS library) with some slight modifications. The basic ingredients of the second, realistic box-shadow here are:

  • a slightly opaque black = rgba(0, 0, 0, 0.1)
  • a high blur radius = 25px
  • a negative spread = -5px
  • a downward offset

If you have these four ingredients in place, your shadow will at least look good, like this:

box-shadow: 0 20px 25px -5px rgba(0, 0, 0, 0.1);

But if you compare this back to the previous example, you will notice there is an extra layer of shadow (0 10px 10px -5px rgba(0, 0, 0, 0.04)) being applied. This extra layer is more compact and even dimmer. It looks like this on its own:

box-shadow: 0 10px 10px -5px rgba(0, 0, 0, 0.04);

By combining the base shadow with this smaller, dimmer shadow, you get an even better look: the area closer to the box gets a little bit darker, which adds more realism to the feel of it.
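
Applied to a hypothetical .card element (the class name is my own placeholder, not something from Tailwind), the combined rule might look like this:

.card {
  background: #fff;
  border-radius: 8px;
  box-shadow:
    0 20px 25px -5px rgba(0, 0, 0, 0.1),  /* the base shadow */
    0 10px 10px -5px rgba(0, 0, 0, 0.04); /* the compact, dimmer layer */
}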

Alternate Uses

Actually, besides being used as a shadow to boost the feel of material design, box-shadow can be used to add extra background layers to your images.
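
For example, here is a minimal sketch of the idea (the selector and colors are placeholders of mine): stacking zero-offset, zero-blur shadows with increasing spread draws solid frames around an image.

img.framed {
  /* no offset, no blur; the spread radius turns each layer into a solid ring */
  box-shadow:
    0 0 0 8px #fff,
    0 0 0 12px teal;
}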

[3W1H] Explaining The WordPress Loop in Detail

This is an article explaining the 3W1H of the well-known WordPress Loop (abbreviated below as the Loop).

Tbh, not all of the 5W1H questions are suitable for explaining tech terms, so I removed the Who and the Where. Still, 5W1H is generally a good tool and starting point for understanding something more thoroughly and deeply. So, let's get started!

I'll start with WHY.

WHY

In general, the Loop is used to display multiple posts on a page. When you want to display an array of data in a specific order, looping through that array is the intuitive approach.

WHEN

The latest WordPress release at the time of writing is version 5.3.2.

have_posts() and the_post() were introduced in WordPress 1.5.0. These two functions are explained below.

WHAT and HOW

The Loop is a few lines of PHP code used to display posts. Its core functions are have_posts() and the_post().

<?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>
	<!-- Display post content here, e.g.: -->
	<h2><?php the_title(); ?></h2> <!-- prints the title of the current post -->
	<?php the_content(); ?> <!-- prints the content of the current post -->
<?php endwhile; else : ?>
	<p><?php esc_html_e( 'Sorry, no posts matched your criteria.' ); ?></p>
<?php endif; ?>

have_posts()

This function checks whether there are more posts available in the main WP_Query object to loop over. It calls the have_posts() method on the global $wp_query object.

https://developer.wordpress.org/reference/functions/have_posts/

the_post()

It iterates the post index in the loop; that is, it updates the global $post object to the current post, so that when you call the_title() within the loop, the_title() gets the title of the global $post object for you.

It also sets up the post-related data in the global scope (post ID, post content, number of pages in the post, etc.) by calling setup_postdata( $post ), with $post being the global post object.

For details, check the source code of the_post() in class-wp-query.php and of the_title() in post-template.php.
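
As a rough sketch of the mechanics (my own paraphrase, not the actual WordPress source):

// paraphrased sketch only; the real logic lives in WP_Query
global $wp_query, $post;
while ( $wp_query->current_post + 1 < $wp_query->post_count ) {
	$wp_query->current_post++;
	$post = $wp_query->posts[ $wp_query->current_post ]; // what the_post() effectively does
	setup_postdata( $post ); // populates globals such as $pages
	the_title(); // template tags now read from the global $post
}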

But How?

So where exactly and how do we use this piece of code?

Let me pull in some quick WordPress basics first. Whenever WordPress parses a user request, it follows the Template Hierarchy to find out which template file should be presented; if no match is found, the theme's index.php file is used. So, very likely, you will find the Loop in every theme's index.php file.

In the default WordPress theme, there are template files for the index view, category view, and archive view, as well as a template for viewing individual posts. Each of these uses The Loop, but does so with slightly different formatting, as well as different uses of the template tags.

https://codex.wordpress.org/The_Loop_in_Action#The_Loop_In_Other_Templates

In short, you can use the Loop in template files like home.php, category.php, archive.php, taxonomy.php, etc. The Loop is even found in single.php in WordPress's default twenty-something themes, where a single specific post is displayed. That seems unintuitive, but it is necessary: the_post() also sets up the post's data and content in the global scope, just as I mentioned above. To be exact, you will find that the_post() calls generate_postdata() to populate the global pages variable, which contains the post_content. Then the_content() uses this global pages variable to provide the content. You can test that yourself by removing the Loop and leaving the wrapped code behind; you'll see.

A simple script to git pull remotely

For my simple private web projects, where there is no script building or bundling, I just don't need a whole CI/CD pipeline to get stuff working. I only need git and GitHub. So here's how I automated part of my deployment.

Before writing this bash script, I would have to first ssh into the remote server, then cd to the right directory and do a git pull, entering my credentials for the HTTPS authentication along the way. Well, it is just a few steps, but when you repeat them over and over and over… it just gets annoying.

ssh-ident

For more details on the steps below, please visit https://github.com/ccontavalli/ssh-ident.

The reason I use ssh-ident is that it

will create an ssh-agent and load the keys you need the first time you actually need them, once. No matter how many terminals, ssh or login sessions you have, no matter if your home is shared via NFS.

0. Generate a pair of keys for your GitHub repo if you don't have them (see the key-generation sketch after this list). Upload the public key's content to GitHub (Settings -> SSH and GPG Keys -> New SSH Key).

1. Install ssh-ident according to the instructions on GitHub. Try to understand the commands so that you can customise them if you want.

2. Create a ~/.ssh-ident file and put in the match conditions to instruct ssh-ident where to find the identities for different situations. Remember to create a corresponding directory (with the same name as the identity you specify in the ~/.ssh-ident file) under ~/.ssh/identities for storing the keys.

3. Log out, log in, and do a git pull once to let ssh-ident save your key and your passphrase. Voila! From now on, for the lifetime of the added key (you can customise the lifetime by setting SSH_ADD_DEFAULT_OPTIONS in ~/.ssh-ident), whenever you perform a git pull on your remote server, ssh-ident comes to the rescue and saves you the hassle of entering credentials again!
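
Here is the key-generation sketch mentioned in step 0. The identity directory and the comment are my own assumptions; match them to the identity name you use in ~/.ssh-ident:

# generate an ed25519 key pair inside the identity directory (hypothetical path)
ssh-keygen -t ed25519 -C "you@example.com" -f ~/.ssh/identities/github/id_ed25519
# print the public key so you can paste it into GitHub
cat ~/.ssh/identities/github/id_ed25519.pub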

The Deployment Script

The script below is a simple bash script that just runs a few commands for you. The usage looks like this:

./deploy.sh [demo/prod]

And that's it! You just have to specify whether you are deploying to your demo server or your production server; the ssh details are contained inside the script.

#!/bin/bash

if [ "$1" != "" ]; then
    environment=$1
else
    echo "Please specify environment (demo/prod)."
    exit 1
fi

if [ "$environment" == "demo" ]; then
    target="ssh -i 'path/to/demo/pem' [email protected]"
elif [ "$environment" == "prod" ]; then
    target="ssh -i 'path/to/prod/pem' [email protected]"
else
    echo "Environment argument should be either demo or prod."
    exit 1
fi

# ssh into the target, then perform git pull + restart the server
cmd="$target 'cd path/to/code && git pull && ./restart_server.sh'"
eval "$cmd"

The Final Execution

After I have pushed my commits from the development machine, I just need to run this script and specify the target environment, and the deployment is done within about 10 seconds.

Hope someone will find this interesting or even useful! Please leave comments below if you have anything to ask!

DNS Records Terms Explained

MX – An MX (Mail Exchange) record routes your domain's email traffic to the servers currently hosting your email.

A – An A (Address) record points a domain or subdomain to an IPv4 address.

CNAME – A CNAME (Canonical Name) record points one domain or subdomain to another domain name. This lets you update a single A record when something changes, no matter how many host records need to resolve to that IP address.

TXT – A TXT (Text) record was originally intended for human-readable text. These records are dynamic and can be used for several purposes (such as verifying domain ownership: a service asks you to add a specific TXT record to prove you control the domain).

SRV – An SRV (Service) record points one domain to another domain name using a specific destination port.

AAAA – The AAAA record is similar to the A record, but it points the domain to an IPv6 address.

'@' in Host Record – The @ symbol indicates the root domain itself. For example, the host record 'ftp' would be for the subdomain ftp.google.com, while '@' would be google.com itself.

DNS Propagation – When DNS records are added or updated, the change can take up to 48 hours to take effect due to caching. When your domain is opened in a web browser, the request does not go to the hosting server directly. It has to pass through several ISP (Internet Service Provider) nodes first, so your computer starts by checking its local DNS cache. Afterwards, the request is sent to your Internet Service Provider, and from there to the hosting server. Each node checks its cache first, and because ISPs refresh their caches at different intervals, it can take some time for the changes you've made to be reflected globally.
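
A handy way to check what resolvers currently see for each record type is dig (example.com is, of course, a placeholder for your own domain):

dig +short example.com A      # IPv4 address record(s)
dig +short example.com AAAA   # IPv6 address record(s)
dig +short example.com MX     # mail routing
dig +short example.com TXT    # text records, e.g. ownership verification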

Docker 101 – Bind mounts or Volumes

This article is written for Docker beginners to understand bind mounts and volumes, and how to choose between them.

Bind mounts

  • A file or directory on the host machine is mounted into a container; it is created on demand if it does not yet exist
  • If you bind-mount to a non-empty directory in the container, that directory’s existing contents will be obscured by your directory on the host file system
  • Changes in the local directory will propagate to the bind-mounted directory in the container
  • Changes in the bind-mounted directory in the container also will propagate back to the local directory

Volumes

  • A new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents
  • If you expose a container directory as a volume, its contents are copied into the volume on the host
  • Volumes are often a better choice than persisting data in a container’s writable layer, because a volume does not increase the size of the containers using it
  • Volumes are easier to back-up or migrate than bind mounts and can be more safely shared among multiple containers

How to Choose

They suit different situations. While both options offer a way to persist data, I would suggest using volumes over bind mounts in normal situations, as they are more stable to work with and support more functionality. But if you are developing a system where you want to easily edit files on the local file system, bind mounts are more convenient for that.
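
Here is a minimal sketch of both options (the container names and the nginx image are placeholders I picked for illustration):

# bind mount: map a host directory into the container (the host path must be absolute)
docker run -d --name web-bind -v "$(pwd)/src":/usr/share/nginx/html nginx

# named volume: let Docker create and manage the storage location
docker volume create site-data
docker run -d --name web-vol -v site-data:/usr/share/nginx/html nginx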

Switching git branch names

Recently I was working on a project that initially had only the master branch, which uses Docker for its environment.

Afterwards, as the project was ready to launch, I made another branch named basic_setup, which no longer uses Docker but relies on the preset environment of the production server instead.

Then I thought it would make more sense to switch the branch names: rename the original master branch to development, and rename basic_setup to master. That way, whenever there are new changes, I can always work on the development branch with Docker in my local environment, which is definitely easier.

Here is the guide I followed, which worked well for me in this use case:

https://multiplestates.wordpress.com/2015/02/05/rename-a-local-and-remote-branch-in-git/
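
For reference, here is roughly what that guide boils down to, adapted to my branch names (a sketch; see the guide for details):

# rename the local branch
git branch -m master development
# delete the old branch on the remote and push the renamed one
git push origin :master development
# make the local branch track the new remote branch
git push origin -u development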

IMPORTANT

But there is one tricky part if you are using GitLab or GitHub: the repository has a default branch setting, and you are forbidden from deleting the default branch on the remote. So you have to switch the default branch to something else before you can remove the old one.

Get an empty Ubuntu docker image up and running with a sudo-enabled user

I'm still a beginner in Docker and still don't have a clue what most of the instructions in a Dockerfile or docker-compose.yml do; I just understand the simple ones. So, to consolidate my understanding and also shed some light for others, I decided to make a very simple Dockerfile for you, presumably also a beginner, to get to play in a new Docker container in a quick and easy way!

FROM ubuntu:latest

RUN apt-get update
RUN apt-get install -y sudo

RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo

USER docker

This is the Dockerfile. Simple and neat.

Just open a directory and create this file, named "Dockerfile". Okay, now the explanations:

Quoting from the Docker Official Documentation:

A Docker image consists of read-only layers each of which represents a Dockerfile instruction. The layers are stacked and each one is a delta of the changes from the previous layer… Each instruction creates one layer.

The FROM instruction sets the Base Image for subsequent instructions. Here you get to pick an image from the Public Repositories and pull it to your computer.

The RUN instruction will execute any commands in a new layer on top of the current image and result in a committed image for the next instruction.

First we do an apt-get update to fetch the latest package lists from the repositories, and then we apt-get install -y sudo, as the default image does not include it.

The third RUN does three things. First, useradd -m docker creates the user docker and its home directory. Second, echo "docker:docker" | chpasswd sets the password of the user docker (check this link). Third, adduser docker sudo adds docker to the sudo group, so that docker becomes a sudo-enabled user.

Finally, the USER instruction sets the user to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow in the Dockerfile.

That’s all for the configurations part.


Now, time to run the image.

When you run an image and generate a container, you add a new writable layer (the “container layer”) on top of the underlying layers. All changes made to the running container, such as writing new files, modifying existing files, and deleting files, are written to this thin writable container layer.

First, build the image, with a name:

# format:
docker build -t a_name_you_like Path_of_Dockerfile
# example:
docker build -t new-ubuntu .

Assuming your console is already in the Dockerfile's directory, you can use . as the path of the Dockerfile.

Second, run the image and give a name to the container:

# format:
docker run -dt --name container_name image_name
# example:
docker run -dt --name new-ubuntu-ctnr new-ubuntu

Here we need two flags: -d for detached mode, so that the container does not occupy the foreground of your console, and -t for allocating a pseudo-TTY. Okay, I admit I don't know exactly what -t does here, but this flag keeps the container running instead of exiting right after you run the command. And that's important.

Now that the container is running (you can check that with docker container ls), you can get a shell inside it and play around like it's a new Ubuntu computer!

# format:
docker exec -it container_name command
# example:
docker exec -it new-ubuntu-ctnr bash

From the documentation, using the -it flags and executing bash as the command creates a new interactive Bash session in the container. Voila!

How to set up Git at a directory with an empty repository

Assuming you are already in the corresponding directory:

Step 1: git init

This command, quoting from the git documentation,

creates an empty Git repository – basically a .git directory with subdirectories for objects, refs/heads, refs/tags, and template files.


Step 2: git add -A

The git add command adds changes in the working directory to the staging area, or more specifically, the index. The -A flag means all files in the entire working tree are updated. Quoting the docs on the index:

The “index” holds a snapshot of the content of the working tree, and it is this snapshot that is taken as the contents of the next commit. Thus after making any changes to the working tree, and before running the commit command, you must use the add command to add any new or modified files to the index.


Step 3: git commit -m 'message'

This will create a new commit from the contents in the index, together with the log message after the -m flag. The default master branch is also created at this point.

The new commit is a direct child of HEAD, usually the tip of the current branch, and the branch is updated to point to it.


Step 4: git remote add origin <url>

This command adds a remote named origin for the repository at <url>.


Step 5: git push -u origin master

First, the simplest format of this command is git push <remote> <branch>, which pushes the specified <branch> to the <remote> repository. Since we are using the default master branch and the origin remote we have just added, the command ends with "origin master".

The -u flag is shorthand for --set-upstream. It tells Git to make your local branch track the remote-tracking branch; in this case, the local master branch will track origin/master. More explanation below.

Remote-tracking branches are references to the state of remote branches. They’re local references that you can’t move; Git moves them for you whenever you do any network communication, to make sure they accurately represent the state of the remote repository.

Checking out a local branch from a remote-tracking branch automatically creates what is called a “tracking branch” (and the branch it tracks is called an “upstream branch”). Tracking branches are local branches that have a direct relationship to a remote branch. If you’re on a tracking branch and type git pull, Git automatically knows which server to fetch from and which branch to merge in.

Setting an upstream branch has the advantage of telling Git to get/use the correct remote-tracking branch whenever you do a git pull, git push or git rebase, etc.
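
Putting the five steps together (the remote URL is a hypothetical placeholder):

git init
git add -A
git commit -m 'initial commit'
git remote add origin git@github.com:your-name/your-repo.git
git push -u origin master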

Setting HTTP Cache Headers at server-side (PHP+Apache)

Recently I was trying to debug a WordPress site. I noticed from the response headers that most pages carry a "cache-control: max-age=600, public" header. Since I was constantly changing code on certain pages and wanted to see the effect immediately, I decided to change the cache-control header to "no-store, must-revalidate".

In functions.php, I added this line at the top:

header("Cache-Control: no-store, must-revalidate");

Even after that, the response header was still cache-control: max-age=600, public. I googled the issue for about an hour to no avail. Then I suddenly remembered that headers can also be set in the Apache settings. And voila, there it was. I found this line in the site config (under the folder /etc/apache2/sites-available/):

Header set cache-control "public, max-age=30"

So this was the culprit. After I changed it to "no-store, must-revalidate" and restarted Apache with service apache2 restart, the cache-control header in the response headers was finally correct!
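
For reference, the corrected directive would look like this (the Header directive requires Apache's mod_headers module to be enabled):

Header set Cache-Control "no-store, must-revalidate"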

All I wanted was just to install nodejs…and npm…

Well, recently it happened that I needed to update the Node.js package on my Ubuntu server (18.04.2). I typed node -v and saw it was v8.x-something, while the current LTS version was 10.16.3, so I set out to upgrade it. After following some simple guides online, I simply couldn't get it to work. Even after installing the latest Node.js, the version check still gave me the old version for both node and npm. WTH!

Without the time to dive in and find the culprit, I completely removed the previous node and npm stuff from my Ubuntu with good old sudo rm -rf, following this guide: https://amcositsupport.blogspot.com/2016/07/to-completely-uninstall-node-js-from.html

Then I saw many guides saying that you first have to execute

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -

before running this

sudo apt-get install -y nodejs

So, after running the curl command above, there was a line of text near the end saying ## Run `sudo apt-get install -y nodejs` to install Node.js 10.x and npm. I then ran sudo apt-get install -y nodejs, and nodejs -v gave me 10.16.3. Great! Next I ran npm -v, and, well… the console gave out

-bash: /usr/local/bin/npm: No such file or directory

Shit! WTH! Even more bad news: I couldn't get the nvm approach to install node and npm either… I was starting to suspect it was my server's peculiar case, rather than those guides having failed others.

After two hours of trying, I finally saw the holy light in the dark! https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-18-04 was my ultimate lifesaver.

I had to follow its "Installing Using a PPA" approach to make things work. God bless the person who wrote this guide! I guess the magic of this working approach lies in its nodesource_setup.sh file. I really hope this can save someone who is in the same weird situation I once was.
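
From memory, the gist of that "Installing Using a PPA" approach is roughly the following; re-check the guide itself for the exact, current commands:

cd ~
# download the setup script to a file first so you can inspect it before running
curl -sL https://deb.nodesource.com/setup_10.x -o nodesource_setup.sh
sudo bash nodesource_setup.sh
sudo apt install nodejs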