How to: Configure a Dynamic DNS Client (DDClient) with NameCheap?

Recently I came across a new set of DNS service providers that offer Dynamic DNS services (for free!). One that I ended up really liking was NameCheap. They have a nice user interface, and what’s best is that you can use your own domain name with them and they’ll provide Dynamic DNS services to you for free! Obviously the free service is not top of the line, but as far as free is concerned, and given the ability to customize the DNS name, I am sold. The downside is that you need a domain name, which for most of us is not really an issue nowadays. Now you may ask, why am I writing about this? Well, I had to set up this service and I really had no clue what to enter in the different fields my router was asking for. So I went ahead and did some research to figure out how to get this new service to work.

How do I configure DDClient?

The following is the format of configuring ddclient.

password=your dynamic dns password

Here is an example of its usage. If you would like to set up dynamic DNS for your bare domain (without www), the configuration is as follows:
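The original configuration example did not survive in this post. Below is a reconstruction from memory of ddclient’s built-in Namecheap support; treat the server and use lines as assumptions and verify them against the ddclient documentation:

```
## /etc/ddclient.conf sketch for Namecheap (reconstructed; verify against
## the ddclient docs -- only the password line survives from the original post)
protocol=namecheap
use=web, web=dynamicdns.park-your-domain.com/getip
server=dynamicdns.park-your-domain.com
login=yourdomain.tld
password=your dynamic dns password
@
```

The final line is the host to update: `@` for the bare domain, or a subdomain name such as `www`.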


If you need to dynamically update a subdomain, substitute host with the subdomain. To dynamically update the domain itself, replace host with @.

What if I don’t have / want a client?

Well, aren’t you a picky one? Fortunately, there is a simple way to do this if you have a scheduler like cron. Take the following for example/reference:

How do I use the browser to dynamically update host’s IP?

Please substitute appropriate values for host, domain, password and IP in the query string:

host=[host]&domain=[domain_name]&password=[ddns_password]&ip=[your_ip]

If you want to update an IP address for bare domain (e.g. yourdomain.tld), then you should specify the following details:

Host = @
Domain Name= yourdomain.tld
Dynamic DNS Password = Domain List >> click Manage next to the domain >> Advanced DNS tab >> Dynamic DNS. If it is not enabled, enable it to check the password.
IP Address= an optional value. If you don’t specify any IP, the IP from which you are accessing this URL will be set for the domain.

NOTE: The values for host name and domain must be in lower case. Please make sure you are using your Dynamic DNS password, not your Namecheap account password.

In this case the query portion of the URL will have the following format:

host=@&domain=yourdomain.tld&password=e747d77054a844409c486973cb&ip=

To dynamically update the IP address for a subdomain (test.yourdomain.tld), use test for Host (host=test). To dynamically update it for www.yourdomain.tld, use www for Host: host=www.

TIP: If you don’t want to use any DNS client, you can just bookmark the URL (after substituting proper values) and access it whenever you need to update your IP.
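To see how the query string fits together, here is a small sketch. The update endpoint itself was lost from this post, so only the query portion is composed; prepend your actual Namecheap update URL when using it:

```shell
# Compose the query portion of the Namecheap dynamic DNS update URL.
# The base update URL (elided in this post) must be prepended to the result.
build_ddns_query() {
    host="$1"; domain="$2"; password="$3"; ip="$4"
    printf 'host=%s&domain=%s&password=%s&ip=%s' \
        "$host" "$domain" "$password" "$ip"
}

# Bare domain example: host is "@"
build_ddns_query "@" "yourdomain.tld" "e747d77054a844409c486973cb" "203.0.113.7"
```

The same function covers subdomains: pass `www` or `test` as the first argument instead of `@`.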

Now think of a cron job using curl like so:

@hourly curl "[ddns_update_url]&ip=`curl -s [echoip_url]`"

(The two URLs did not survive in this post: the outer one is your Namecheap update URL with the query string filled in, and the inner one is an echoip-style service that returns your public IP.)

And there you go! The trick here is to use echoip, which returns your current IP address (well, the one you are using to access the site). This does make the job dependent on that service being up, but at least it gives you an idea of how you could manually/automatically update your IP address. Hope this helps!

How to: Clone the mac address for the WAN interface on a Ubiquiti Unifi Security Gateway


Nowadays you would expect that cloning a MAC address on a gateway/router would be a basic feature… but as Apple has taught us, we don’t know what we want even when we need it. Getting back on topic, we recently started testing Ubiquiti products, and part of that included the UniFi Security Gateway. One of the features we are not able to configure via the web administration console / controller is cloning a MAC address. For obvious reasons, we need to do that in order for our ISP to assign the correct IP address.


The solution is a bit more involved than I would have hoped. As you probably already know, you can access the USG (UniFi Security Gateway) via SSH in order to configure several things. This has proven useful, as the default IP it comes with is not compatible with our network. This time, though, it was useful for accessing the configuration console and changing the MAC address of the WAN interface (eth0 in my case). To do this, you need to follow these steps:

  1. Connect to your USG via SSH
  2. Run the following commands (where you replace the Xs for the right numbers):
set interfaces ethernet ethX mac XX:XX:XX:XX:XX:XX
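For context, the `set` command runs inside EdgeOS configuration mode, so the full session looks roughly like this (a sketch; `eth0` and the MAC value are examples):

```
configure
set interfaces ethernet eth0 mac AB:CD:EF:01:23:45
commit
exit
```

`commit` applies the change to the running configuration; remember it will still be lost on reboot until persisted as described below.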

Now, if you connect your Ethernet cable to the USG you’ll notice the change took place and you’re using the new MAC address. But here comes the tricky part: when you reboot/restart the USG, the settings will be lost! In order to persist the settings, you need to export the running configuration (after performing the changes/commands mentioned above) and include it in your config.gateway.json file. At this point I was like you, clueless as to what to do next. If you want to learn how to do this, please visit: UniFi – How to further customize USG configuration with config.gateway.json.

Now that you have prepared everything, it should look like this:


"interfaces": {
        "ethernet": {
                "eth0": {
                        "mac": "AB:CD:EF:01:23:45"
                }
        }
}

Remember to put your actual MAC address there, and confirm that your WAN interface is eth0. Also, don’t forget to run the file through a JSON validator, just to make sure the syntax is correct.
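If you have Python on the controller, a quick local syntax check works as a validator (a sketch; the heredoc writes a sample file so the snippet is self-contained, but you would point CONFIG at your real file):

```shell
# Quick local syntax check before uploading config.gateway.json.
CONFIG=config.gateway.json
cat > "$CONFIG" <<'EOF'
{
    "interfaces": {
        "ethernet": {
            "eth0": {
                "mac": "AB:CD:EF:01:23:45"
            }
        }
    }
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool "$CONFIG" > /dev/null && echo "valid JSON"
```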

Now, if you are using Ubuntu you need to create this JSON file on your controller. After logging in, the default folder where you need to place the file is /var/lib/unifi/sites/default. Keep in mind default here is the name of the site, so if your site has a different name you will need to use that folder instead. Once there, create the file (nano config.gateway.json) and paste in the configuration we obtained just before, in JSON format. If you are doing this alongside different settings, you’ll need to add the interfaces section to your existing config file. Once you’re done, reboot your USG and see if the setting worked. If it booted correctly and the MAC address changed as indicated, then you’re all set and good to go!

UniFi – How to further customize USG configuration with config.gateway.json

Obtained from:


The file config.gateway.json is used for advanced configuration of the USG. This file allows you to make customizations persistent across provisions.

When making customizations via the config.gateway.json file it is best to extract only the customizations that can’t be performed via the controller UI. This may take some patience because if you get the formatting wrong you’ll trigger a boot loop on the USG.

Some users may find they can get away with dumping the full config, but it’s possible that this could cause issues down the road. It could cause a bootloop when you change a setting via the controller UI.

By default there is no such file; a user has to create it in order to use it. The config.gateway.json file is placed under the [UniFi base]/data/sites/the_site directory.

For every site, you will find a unique random string assigned to the site. In this case, the random string ceb1m27d is the folder name that shall be used under [UniFi base]/data/sites/ (on the CloudKey use /srv/unifi/data/sites). Therefore, in my case, I will create a folder named ceb1m27d underneath, and then place config.gateway.json inside.

Here are possible locations of the Unifi Base directory:

Data folder locations:

Windows (Vista and later): C:\Users\username\Ubiquiti UniFi\data
Windows XP: C:\Documents and Settings\username\Ubiquiti UniFi\data

Linux (actual directory): /var/lib/unifi/
Linux (symlinked directory): /usr/lib/unifi/data

Mac: /Applications/

Before customizing anything, you should check the existing config.boot to make sure you aren’t using an existing rule number (if applicable). You can do this several ways. I’m going to use SSH to connect to my USG and issue:

cat /config/config.boot

So for my example, I’m going to create a DNAT rule for DNS (this is just an example, may not be best use case). I’ll configure using EdgeOS formatting:

set service nat rule 1 type destination
set service nat rule 1 inbound-interface eth0
set service nat rule 1 protocol tcp_udp
set service nat rule 1 source port 53
set service nat rule 1 inside-address address
set service nat rule 1 inside-address port 53

Once I’m done, I want to export the config. That is done via:

mca-ctrl -t dump-cfg

Note, I don’t bother exporting to a file. You can if you wish. If you were to do that, you would run:

mca-ctrl -t dump-cfg > config.txt

So I find the appropriate section in my config output:

                "nat": {
                        "rule": {
                                "1": {
                                        "destination": {
                                                "port": "53"
                                        },
                                        "inbound-interface": "eth0",
                                        "inside-address": {
                                                "address": "",
                                                "port": "53"
                                        },
                                        "protocol": "tcp_udp",
                                        "type": "destination"
                                }
                        }
                }

So that’s my custom rule, but it’s not entirely in the right format. If you look at the config output from the start, there is a certain structure. If I wanted JUST this rule in config.gateway.json, my file would look like:

{
        "service": {
                "nat": {
                        "rule": {
                                "1": {
                                        "destination": {
                                                "port": "53"
                                        },
                                        "inbound-interface": "eth0",
                                        "inside-address": {
                                                "address": "",
                                                "port": "53"
                                        },
                                        "protocol": "tcp_udp",
                                        "type": "destination"
                                }
                        }
                }
        }
}

If you have multiple sections to add, like say service and then VPN, the closing bracket for that section would be followed by a comma, then you would start the next section. For example service and VPN would be two separate sections.
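As a skeletal sketch of that layout (the contents of each section come from your own mca-ctrl dump; the "ipsec" key here is only a placeholder for whatever your VPN dump contains):

```json
{
        "service": {
                "nat": { }
        },
        "vpn": {
                "ipsec": { }
        }
}
```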

It would be useful to validate your code. There are a number of free options out there if you search json validator via your favourite search engine.

Hopefully this gives some insight on how to create a config.gateway.json file.

Further reading

We have run across a few particular scenarios where the config.gateway.json file has come in handy. Below is a list of resources for the different use cases we have run across, which might help you get an idea of what you can do or help you do it following our steps:

How to: Launch a Process that Persists even if you Disconnect your SSH Terminal

I’m not sure if this has happened to you before, but there are times when you have an application you wish to launch, but if you disconnect from the terminal (SSH or physical) that process gets terminated. For example, I am currently running a proxy application that displays its status on screen (connections to the upstream servers, work load, connections to the downstream clients, etc.). I want to be able to disconnect without causing it to be shut off. In order to achieve this, I am looking at using a terminal multiplexer. What this does is allow a user to access multiple separate terminal sessions inside a single terminal window (physical or remote, like SSH). This opens up a new set of possibilities. If you are like me and sometimes end up with several SSH connections to the same server so you can observe different logs (think tail -f), then what a multiplexer gives you is the ability to do all of that in one connection. Thus far my favourite multiplexer is screen, which comes built in with Ubuntu. Below I’ll get into the details on how to use it:

Screen is a terminal multiplexer, which allows a user to access multiple separate terminal sessions inside a single terminal window or remote terminal session (such as when using SSH).


The screen package can be installed on Ubuntu using apt-get or any other method you prefer. As far as I know it already comes installed with the latest versions (at least I haven’t needed to install it, for some reason).

Starting with the Jaunty release, the screen-profiles package (later renamed Byobu) provides advanced features such as status bars, clocks, and notifiers. The package can also be manually installed on previous Ubuntu releases.


Screen can be started by typing

screen

in a terminal. Press Enter after reading the introductory text.

Virtual terminals in Screen can be manipulated by pressing the Ctrl+A key combination, and subsequently pressing a key to execute one of the commands given below:

  • c creates a new virtual console

  • n switches to the next available virtual console

  • p switches back to the previous virtual console

  • " lists all available virtual consoles and their assigned numbers

  • hitting a number key brings the corresponding virtual console to the foreground
  • Esc lets you scroll back and forth in your terminal output

  • d detaches the current screen session and brings you back to the normal terminal

When a Screen session is detached, the processes that were running inside it aren’t stopped. You can re-attach a detached session by typing

screen -r

in a terminal.

To remove the annoying copyright notice at startup, edit your /etc/screenrc with

gksudo gedit /etc/screenrc

and remove the hash which begins the line

#startup_message off

Save the file, and you will not see it again 

For further information and more advanced commands, you can refer to the screen man page.

How to: Extract the XML and PDF Files from Aspel SAE 6.0


Previously, with version 5.0, the CFDI receipts and their XML files were saved automatically to a system folder, and they were even organized by date (one folder per year, one per month and one per day). However, with the new version 6.0 these files are no longer saved automatically.

At first this seemed to us like the opposite of an improvement. However, digging deeper, the reason these files are no longer generated automatically is that if a virus was present, or that folder was changed, or the files were renamed, practically all of that information was lost and could no longer be reprinted. From what we have observed, neither Aspel nor its users employ basic software, backup and security policies, and this was a very frequent occurrence. Worst of all, Aspel SAE relied on those files to function. It didn’t take much: if the path pointed to a USB drive that wasn’t connected, that alone made the system fail. Likewise, a network drive or a simple renaming of the folder, as mentioned above, caused a system failure.

In the new version, Aspel SAE 6.0 saves these directly in the database, thereby avoiding the problems that existed with the file system. This clearly has benefits given the issues described above; however, for users who relied on this behavior to keep PDFs and XMLs at hand, it became a disadvantage. Personally speaking, they should have offered an option to keep saving the files to a folder purely as a user-side backup, while having the system rely on the database to avoid problems.


In the new version of Aspel SAE 6.0, the XML and PDF files are saved in the database. There is no way to have them saved to a folder automatically, but it is possible to retrieve the files through the system and save them manually. Although not ideal, this lets us keep saving the files and pull them up when we need them. Aspel probably figured there is no need to save them to a folder, since they are always at hand through the program.

There is therefore a new command to retrieve our files, called “Extracción de CFDI” (CFDI Extraction). When you press this button inside the invoicing module, the system will ask for a path where it will save the XML and PDF of the selected digital invoice.

It is worth mentioning that you can select as many invoices as you want, and the system will save a copy of the XML and PDF of every selected digital invoice. Likewise, if you select an unstamped invoice, the system will not let you save any XML or PDF until the selection consists exclusively of already-stamped invoices. Just keep in mind that the system has to go to the database, fetch the information for each invoice one at a time, and generate the PDF on the spot. If you select 10 invoices, they can take a couple of minutes to generate and save. The system saves invoice by invoice into the folder, so you can watch the progress, but it is not instantaneous. If you want to save all of the year’s invoices at once… this can keep the system busy for quite a while.

How to: “Change product key” in Windows 8 or in Windows Server 2012

I’ve learned over the years not to activate a Windows product until I am confident it is stable. I say that because I used to activate Windows right after installation, and whether because of a third-party update, malfunctioning hardware, etc., I had to reinstall, and eventually I had to call Microsoft to get my product key unlocked. However, something strange happened. In order to not activate Windows right off the bat I installed it via the network, which I guess uses an evaluation license or something. Once activation time came I just couldn’t type my license key. I really wasn’t ready to re-install, as that defeats the whole purpose of not activating on day zero. I did some research and found a way to provide your license / product key once again.


When you try to change the product key in Windows 8 or Windows Server 2012, you cannot find a “Change product key” link in the System item in Control Panel. This used to be available in previous versions of Windows, but for some reason they decided to get rid of it. I can’t fathom why; it seems like a pretty useful feature. For example, you may want to convert a default setup product key to a Multiple Activation Key (MAK) on a computer that is running Windows 8.

Fortunately we have a few options at our disposal to change the product key in the new versions of Windows.


Method 1

  1. Swipe in from the right edge of the screen, and then tap Search. Or, if you are using a mouse, point to the lower-right corner of the screen, and then click Search.
  2. In the search box, type Slui 3.
  3. Tap or click the Slui 3 icon.
  4. Type your product key in the Windows Activation window, and then click Activate.

Method 2

  1. Swipe in from the right edge of the screen, and then tap Search. Or, if you are using a mouse, point to the lower-right corner of the screen, and then click Search.
  2. Type Command Prompt in the Search box.
  3. Right-click Command Prompt, and then click Run as administrator. If you are prompted for an administrator password or for a confirmation, type the password, or click Allow.
  4. Run the following command at the elevated command prompt:
    slmgr.vbs /ipk <Your product key>

Note: You can also use the Volume Activation Management Tool (VAMT) 3.0 to change the product key remotely, or if you want to change the product key on multiple computers.


How to: Speed up CrashPlan by Reassigning Cache Folder to a Different Directory

I’m guessing if you’ve bumped into this article it’s because you have deployed CrashPlan and you’re looking for ways to speed it up. I have been a customer for several years now and I am happy with the solution from a cost/benefit perspective. One issue, though, is that we have our main storage server with terabytes worth of data. The thing is, CrashPlan out of the gate is not tuned for handling terabytes worth of backup. So my first experience hitting the 2/3 terabyte mark was CrashPlan simply crashing. You can read more about it here: How to: Prevent CrashPlan Pro from shutting down abruptly. Basically the Java Virtual Machine needed more RAM in order to load and display what was going on with the backup engine. The more data and complexity you add, the more memory it needs. If you haven’t tweaked your memory allowance, you should start there.

Moving on to the next tune-up you can perform. Let me paint you our scenario: as you know, CrashPlan installs on the main system drive, and the cache is stored in the Application Data folder, which coincidentally is on the main system drive (by default). So one day I come into the office, and by noon we are having several issues that can be traced to our storage server, particularly its performance. I could barely log in, and when I finally started looking at the performance monitor I noticed my system drive was 100% active with a huge Disk Queue Length. Baffled, as nothing is stored on the system drive, I dug in and found a file in a \CrashPlan\Cache folder with the highest Total Bytes per second across the board.

What I learned that day is that CrashPlan actively reads from and writes to a cache store as it performs a backup and validates whether it needs to upload something or not. Our system drive, although an SSD, was not designed to take that kind of heat. I decided it was time to figure out how to move that onto our RAID array, which was designed for IO-intensive operations.


As I pointed out, our solution was to move CrashPlan’s cache store to a drive designed to handle IO-intensive operations. In order to perform this feat you need to change the configuration via an XML file offered by Code 42’s developers (thank you!), although it does come with a big disclaimer:

The information presented here is intended to offer information to advanced users. However, Code42 does not design or test products for the use described here. This information is presented because of user requests.

Our Customer Champions cannot assist you with unsupported processes, so you assume all risk of unintended behavior. You may want to search our support forum for information from other users.


CrashPlan stores a cache of temporary information on your computer, including information about your destinations, the data you have on your computer, and a number of settings that help CrashPlan run fast. If your cache becomes large, it is recommended you clear it to resolve the issue.

Alternatively, you can move the cache to a drive with more storage or higher IO speeds as described in this article; however, please note that this is an unsupported process.

Recommended Solution

  1. Stop the CrashPlan service
  2. Find my.service.xml
    If you used the default install location, then the file is located in the following directory:

    • Windows Vista, 7, 8, 10, Server 2008, and Server 2012: C:\ProgramData\CrashPlan\conf
      To view this hidden folder, open Windows Explorer and paste the path in the address bar. If you installed per user, see the file and folder hierarchy for file locations.
    • Windows XP: C:\Documents and Settings\All Users\Application Data\CrashPlan\conf
      To view this hidden folder, open Windows Explorer and paste the path in the address bar. If you installed per user, see the file and folder hierarchy for file locations.
    • OS X: /Library/Application Support/CrashPlan/conf/
      If you installed per user, see the file and folder hierarchy for file locations.
    • Linux: /usr/local/crashplan/conf
    • Solaris: /opt/sfw/crashplan/conf
  3. Open the file in a text editor as an administrator (Windows) or with an editor that has root permissions (Mac)
    See External Resources for more information
  4. Find the line enclosed by <cachePath></cachePath>
  5. Change the file path inside the <cachePath> section to the file path where you want to move the cache
    For example: D:\Programs\CacheStorage
  6. Save the changes
  7. Restart the CrashPlan service
  8. Navigate to the appropriate directory below and delete the cache folder:
    • Windows Vista, 7, 8, 10, Server 2008, and Server 2012: C:\ProgramData\CrashPlan\cache
      To view this hidden folder, open Windows Explorer and paste the path in the address bar. If you installed per user, see the file and folder hierarchy for file locations.
    • Windows XP: C:\Documents and Settings\All Users\Application Data\CrashPlan\cache
      To view this hidden folder, open Windows Explorer and paste the path in the address bar. If you installed per user, see the file and folder hierarchy for file locations.
    • OS X: /Library/Caches/CrashPlan
      If you installed per user, see the file and folder hierarchy.
    • Linux: /usr/local/crashplan/cache
    • Solaris: /opt/sfw/crashplan/cache

Deleting the cache does not impact the data stored in your backups or change your settings. CrashPlan will rebuild your cache in the new location.
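For reference, the element you edit in steps 4–5 ends up looking like this (just the single element, using the example path from above; the rest of my.service.xml stays untouched):

```xml
<cachePath>D:\Programs\CacheStorage</cachePath>
```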

Alternative Solution

Alternatively, if you are comfortable with creating soft links (also known as symbolic links or symlinks), you can use a soft link to redirect data from the original cache folder to an alternative location.
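A sketch of that approach using scratch paths (substitute the real cache location from the list above for your platform, and stop the CrashPlan service first):

```shell
# Relocate the cache directory and leave a symlink at the original path.
OLD=/tmp/crashplan-demo/cache
NEW=/tmp/crashplan-demo/fast-disk/cache

mkdir -p "$OLD" "$(dirname "$NEW")"   # scratch directories for this demo
mv "$OLD" "$NEW"                      # move the existing cache contents
ln -s "$NEW" "$OLD"                   # old path now points at the new disk
```

CrashPlan keeps writing to the old path, but the data lands on the faster disk.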

External Resources

Please note that the instructions linked below are provided as a reference, but they have not been tested by Code42.

How to: Enable Shadow Copy or Previous Version in Windows 2012 R2?

I recently had a bit of an issue with a program. It uses an access database and one of the employees modified it but we needed to revert the changes. Seemed simple enough, just reach out to pick a previous version from the server share and call it a day. I unfortunately realized that we had not activated Shadow Copy on the server. Enabling Shadow Copy allows you to configure how often the server takes “snapshots” of the files and allows you to go back in time and see the versions as they are modified. It is a pretty nifty feature that you need to activate manually. Word of caution, it does consume disk space.

So, moving on to the How To:

To enable and configure Shadow Copies of Shared Folders

  1. Click Start, point to Administrative Tools, and then click Computer Management.
  2. In the console tree, right-click Shared Folders, click All Tasks, and then click Configure Shadow Copies.
  3. In Select a volume, click the volume that you want to enable Shadow Copies of Shared Folders for, and then click Enable.
  4. You will see an alert that Windows will create a shadow copy now with the current settings and that the settings might not be appropriate for servers with high I/O loads. Click Yes if you want to continue or No if you want to select a different volume or settings.
  5. To make changes to the default schedule and storage area, click Settings.

As easy as that! I suggest you only enable Shadow Copies of Shared Folders for user shares. It is better to back up a SQL database than to have it rely on Shadow Copies; this is really not a backup solution. Also, Shadow Copies is not recommended for IO-intensive loads.

How to: Obtain historical stock prices from Yahoo finance (you can query them via Excel too)



I was working on creating a spreadsheet to calculate profits and losses on options positions but didn’t know how to populate Excel with stock quotes. Back in the day there used to be an interface to get stock quotes from the MSN Money site, but it is not supported anymore. The idea behind this spreadsheet was to use the latest and historical quotes to calculate intrinsic values of options and P&L for expired ones. Kind of just trying to keep track of my record and evaluate performance. Of course, the issue we face is that stock prices move every second, and maintaining all that data manually is not worth it. After some research I tried using Google Finance to populate Excel to no avail, but found Yahoo Finance supports this more easily. I ended up writing a post to help others with regards to that: How to: Obtain stock quotes from Yahoo finance (you can query them via Excel too).

Just recently, a reader asked about making the query date-specific. Yahoo! Finance does support getting historical (closing) prices. It is a very basic interface, so it comes with a lot of limitations. If you just want historic closing prices, then this is the place for you. Within the limitations I have found (obviously there are more, but these come from my simple needs), I have identified these:

  • You are NOT able to get more than one Stock or Index at a time
  • You are NOT able to download data for everything (exchange rates is one example, there are some “weird/foreign” stocks as well)

but now that I have completely taken all the enthusiasm away from you, let’s get into the exciting part of how to get this to work:


In order to create the web query that will provide us with historical stock (closing) prices, we need to supply Yahoo some information:

  1. Stock Symbol (Mandatory)
  2. Date Range (Optional, if not provided it will return all data available)
  3. Interval (Optional, defaults to days if not provided)

The URL is composed as follows (step by step):

  • Starting URL:
  • Stock Symbol
  • Starting Date
    • Month (goes from 0 to 11, don’t ask me why. So if you want July, which is the seventh month of the year, you need to supply 06)
    • Day
    • Year
  • Ending Date
    • Month (again, goes from 0 to 11)
    • Day
    • Year
  • Interval
    • Here you supply one of the three trading periods supported:

Name     Tag
Daily    d
Weekly   w
Monthly  m

  • And let’s tell Yahoo we want this as a CSV file
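The steps above can be sketched as a small helper. Both the base URL and the exact query parameter names were lost from this post, so the `a` through `g` names below are from memory of the old ichart API and should be treated as illustrative; the zero-based month handling is the part documented above:

```shell
# Compose the historical-quotes query string. Pass normal calendar months
# (1-12); the function subtracts 1 because Yahoo's months run 0-11.
# Parameter letters (a..g) are an assumption from memory of the old API.
build_history_query() {
    sym="$1"; sm="$2"; sd="$3"; sy="$4"   # symbol, start month/day/year
    em="$5"; ed="$6"; ey="$7"; ivl="$8"   # end month/day/year, interval (d/w/m)
    printf 's=%s&a=%02d&b=%d&c=%d&d=%02d&e=%d&f=%d&g=%s' \
        "$sym" "$((sm - 1))" "$sd" "$sy" "$((em - 1))" "$ed" "$ey" "$ivl"
}

# AAPL, Jan 1 2010 through Dec 31 2015, weekly:
build_history_query AAPL 1 1 2010 12 31 2015 w
```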

We’re done. Now, if you click the resulting URL you will start the download of your CSV file containing all the historical prices for Apple (AAPL) from 2010 through the end of 2015. Did you notice how I used https instead of http? Yahoo Finance supports https if you want to have your queries protected, which I recommend.

Results and Conclusions

Now, let’s study some sample output:

Date Open High Low Close Volume Adj Close
04/01/2010 213.429993 215.589996 190.25 192.059998 215986100 25.547121

Let’s go column by column:

  • Date
    • This is the date for which the values correspond
  • Open
    • The opening price for AAPL for the given period/interval (week in this case)
  • High
    • The highest price for AAPL for the given period/interval (week in this case)
  • Low
    • The lowest price for AAPL for the given period/interval (week in this case)
  • Close
    • The closing price for AAPL for the given period/interval (week in this case)
  • Volume
    • The trading volume for AAPL for the given period/interval (week in this case)
  • Adjusted Close (Adj Close)
    • The closing price, adjusted for splits and the like. For example, there was a 7:1 split in Apple, so the adjusted close is less than 1/7th of the actual closing price back in 2010. I believe that besides splits, it also considers dividends in the adjusted price. If you look at the price between 1/10/2015 and 2/11/2015, you can see there is a 0.509338 price differential, which is very close to the .52 dividend paid.

UPDATE (May 20th, 2017)

Some sad news for users of Yahoo Finance to obtain free stock quotes: Yahoo has changed the way you form the URL and has made it somewhat more difficult for you to use the service. I think for most cases it is not a viable service anymore. As reported by some of the comments, now the URL requires a parameter called CRUMB which is obtained from the cookies set when establishing a session with Yahoo. In other words, that crumb is unique to your session.

Your options:

1) You can do something like this:

Parse the HTML when you make a query using the web page: Example URL:

2) Use the new URL and somehow get your cookie and obtain the CRUMB:

What do we know?

Yahoo changed their URL scheme to require an established session with cookies. Potentially you could use ‘curl’ to establish a connection to the main site, get a cookie, and then use that to get a crumb. With that crumb, you can now form your download URL. Because the crumb is tied to the cookie, it doesn’t change within your session, but it is bound to your browser, so it won’t work in a different browser with a different cookie. Because of that, you might get an error like this:

{
    "finance": {
        "error": {
            "code": "Unauthorized",
            "description": "Invalid cookie"
        }
    }
}
Do note as well that times changed to POSIX/UNIX timestamps. You’ll need to convert your dates now.
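For example, with GNU date (an assumption about your environment; BSD/macOS date uses `-j -f` instead), converting a calendar date to the timestamp the new URLs expect looks like this:

```shell
# Convert a calendar date (UTC midnight) to a POSIX timestamp.
date -u -d "2017-05-20 00:00:00" +%s    # prints 1495238400
```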

3) Use instead

A reader in the comments suggested it and it seems I was able to figure out how to use the API there. There is a catch of course: No historical prices… at least I wasn’t able to see an option for it. If you manage to get historical prices from Yahoo please share with us via the comments section. If this works for you, check out my post: How to: Obtain historical stock prices from Yahoo finance (you can query them via Excel too) Part II

4) Use Google instead

This probably means that for regular users Yahoo Finance is no longer a viable option to obtain stock quotes in Excel. I also can’t seem to be able to access it via the Excel web browser. My guess is that Yahoo has blocked access for the Excel client (very sneaky). At this point my suggestion would be to use Google Finance. I’ll try to put together a post and reference it here so people can convert over.



How to: Upgrade your Dash MasterNode to the latest version

If you have set up a Dash MasterNode, you probably thought the worst was over, but after that maniac upgrade cycle that took us through fifty-something minor versions and lost payments, you know better now. Fortunately the upgrade process is simpler now, and I thought about documenting it because every time I forget even the simple commands.


  1. Make sure you have the latest updates: apt-get update && apt-get upgrade
  2. Get the latest version:
    1. Go to the Dash downloads page and get the download link for the latest version.
    2. Perform a wget of that link.
  3. Validate there weren’t any errors with the download by performing an MD5SUM check:
    1. md5sum dash-
    2. The output “7645dbd0d41be87105c7f8dcf06ad105 dash-” needs to match the hash on the site (“7645dbd0d41be87105c7f8dcf06ad105 *dash-”).
  4. Extract your .tar.gz file:
    1. tar -xvzf dash-
  5. Copy the new executable to where you have the old one. Be careful to stop the service/executable before performing the copy, otherwise it won’t happen and you’ll get an error message reading “cp: cannot create regular file ‘/dash/dashd’: Text file busy”. You can reboot your machine, considering you recently updated Linux, and that will stop the process unless you have it set to autostart.
    1. cp dash-0.12.0/bin/dashd /dash/dashd
  6. Start your Dash MasterNode:
    1. /dash/dashd
      Dash server starting
  7. Check your log file to see if there are any issues:
    1. tail -f .dash/debug.log


Congratulations, you’re done! We generally use DashNinja to validate our MasterNodes are running correctly.
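The checksum step above can also be scripted so a mismatch stops an upgrade script early. A sketch, where `dash-archive.tar.gz` is a stand-in for the truncated archive name:

```shell
# Create a stand-in file so the snippet is self-contained; in practice this
# would be the archive you just downloaded with wget.
printf 'demo contents\n' > dash-archive.tar.gz

# In practice, paste the hash published on the download site into this file.
md5sum dash-archive.tar.gz > expected.md5

# md5sum -c exits non-zero on mismatch, so it can gate the rest of a script.
md5sum -c expected.md5 && echo "checksum OK"
```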
