Thursday, November 23, 2017

Checking the version of your MongoDB instance

If you want to know the exact version of your MongoDB instance, simply use:

 db.version();

It couldn't be simpler!
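
And if you are at the OS shell rather than inside the mongo shell, the same check can be run non-interactively (a minimal sketch, assuming the legacy mongo shell binary is on your PATH; newer installs ship mongosh instead):

 mongo --eval "db.version()"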

Thursday, October 26, 2017

Good news, everyone!

Millennials are obsessed with side hustles because they’re all we’ve got
https://qz.com/711773/millennials-are-obsessed-with-side-hustles-because-theyre-all-weve-got/

Modern B-Tree Techniques
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.219.7269&rep=rep1&type=pdf

The Asynchronous Computability Theorem
https://medium.com/@eulerfx/the-asynchronous-computability-theorem-171e9d7b9423

The Times I’ve Messed Up As A Developer
https://medium.com/@zacharykuhn/the-times-ive-messed-up-as-a-developer-3c0bcaa1afd6

The female code-breakers who were left out of history books
http://www.bbc.com/future/story/20171009-the-female-code-breakers-who-were-left-out-of-history-books

Post-Scarcity Economics
https://lareviewofbooks.org/article/post-scarcity-economics

Your brain does not process information and it is not a computer

Recovering a crashed MySQL instance

Raise your hand if you've never had a problem with corrupt government officials... oops, MySQL databases.

Last week I was importing a new set of tax calculation rules into our tax engine, but I forgot to check whether I had enough RAM and swap on the machine to do that. As a result, memory consumption hit the ceiling and the OOM killer killed the MySQL process.

When I restarted the machine and tried to connect to the MySQL server, I was greeted with this message:

 Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
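
To confirm the OOM killer really was the culprit, the kernel log can be checked (a quick sketch; log locations vary by distro):

 dmesg | grep -i -E 'killed process|out of memory'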

Stopping and starting the MySQL process (a trick we all learned back in the good old Windows days) didn't help.
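
For the record, on a systemd-based distro that restart usually boils down to something like this (the exact service name may vary):

 sudo systemctl restart mysql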

Checking the MySQL error log in /var/log/mysql/error.log, I spotted the following suspicious lines:

 [ERROR] InnoDB: Ignoring the redo log due to missing MLOG_CHECKPOINT between the checkpoint 97983975202 and the end 97984446291.
 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
 [ERROR] Plugin 'InnoDB' init function returned error.
 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
 [ERROR] Failed to initialize plugins.
 [ERROR] Aborting
 [Note] Binlog end
 [Note] /usr/sbin/mysqld: Shutdown complete

OK, so it seems the redo log is corrupt. Well, if the redo log is corrupt, let's remove it! I wasn't worried about the data that was being imported: I could simply re-run the full import later.

 sudo rm /var/lib/mysql/ib_logfile0
 sudo rm /var/lib/mysql/ib_logfile1
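
A slightly safer variant is to move the redo log files out of the way instead of deleting them, so they can be put back if the recovery attempt makes things worse:

 sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
 sudo mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak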

Stopping and starting the MySQL process still didn't work, but after removing the redo log I got a different error in the log file (/var/log/mysql/error.log):

 [ERROR] InnoDB: Your database may be corrupt or you may have copied the InnoDB tablespace but not the InnoDB log files. Please refer to http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html for information about forcing recovery.

Following the link given in the error message, it seemed the right thing to do was to set the innodb_force_recovery parameter in the config file (mine was located at /etc/mysql/mysql.conf.d/mysqld.cnf). I started with 1, as suggested, but I was only able to start the MySQL server when I got to innodb_force_recovery=4.

 [mysqld]
 innodb_force_recovery = 4

After starting the MySQL server and confirming I could read all the databases and tables (but not write to them, as innodb_force_recovery = 4 puts InnoDB in read-only mode), I dumped all my data:

 mysqldump -u root -p --all-databases > all_db_local.sql
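
If your databases contain stored procedures, functions or scheduled events, it may be worth adding the corresponding mysqldump flags, since --routines and --events are off by default (--triggers is already on):

 mysqldump -u root -p --all-databases --routines --events > all_db_local.sql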

Having all my data saved in a file, I completely removed my current MySQL installation (and all databases) and installed a new one:
 sudo rm -rf /var/lib/mysql/
 sudo apt-get remove --purge mysql-server mysql-client mysql-common
 sudo apt-get autoremove
 sudo apt-get autoclean
 sudo apt-get install mysql-server
 sudo apt-get install libmysqlclient-dev

With a fresh MySQL install, all I had to do was bring my data back in:

 mysql -u root < ~/all_db_local.sql
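
Since an --all-databases dump also carries the mysql system schema, it doesn't hurt to reload the grant tables (or simply restart the server) once the import finishes:

 mysql -u root -e "FLUSH PRIVILEGES;"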

It took some time but, in the end, I had my MySQL server back in operation.

Friday, September 22, 2017

Good news, everyone!

Some articles that caught my attention lately:

Feudalism and the “Algorithmic Economy”
https://medium.com/@ebonstorm/feudalism-and-the-algorithmic-economy-62d6c5d90646

Employees Who Stay In Companies Longer Than Two Years Get Paid 50% Less
https://www.forbes.com/sites/cameronkeng/2014/06/22/employees-that-stay-in-companies-longer-than-2-years-get-paid-50-less/#3e5020e4e07f

Machine Learning Crash Course: Part 4 - The Bias-Variance Dilemma
https://ml.berkeley.edu/blog/2017/07/13/tutorial-4/

The future of deep learning
https://blog.keras.io/the-future-of-deep-learning.html

The limitations of deep learning
https://blog.keras.io/the-limitations-of-deep-learning.html

The myth of drug expiration dates
https://www.propublica.org/article/the-myth-of-drug-expiration-dates

Drug Companies & Doctors: A Story of Corruption
http://www.nybooks.com/articles/2009/01/15/drug-companies-doctorsa-story-of-corruption/

The Dark Secret at the Heart of AI
https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

In Game Theory, No Clear Path to Equilibrium
https://www.quantamagazine.org/in-game-theory-no-clear-path-to-equilibrium-20170718/

Agents that imagine and plan
https://deepmind.com/blog/agents-imagine-and-plan/

Psychology of Intelligence Analysis
https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/

Monday, July 17, 2017

Note to myself: do not forget iptables on AWS!

Last week I spent a few hours struggling with an Oracle database connectivity problem on a Red Hat machine on AWS that almost made me jump out of the window.

The database seemed OK, but I was not able to connect to it remotely.

To start with, the usual verifications:

 - Are Oracle env variables correctly configured?
 env | grep -i ora 
 - Is the DB up and running?
 ps -ef | grep -i pmon 
 - Are the listeners up and running?
 lsnrctl status
 netstat -ln
 - Can I connect to the DB?
 sqlplus <user>/<passwd>@<host>:<port>/<service>

Everything seemed fine, but I was still not able to connect to the database remotely. I could connect locally but not remotely.
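
A quick way to see whether the listener port is even reachable from outside is a port probe from the remote machine (a sketch using netcat; 1521 is the default listener port, adjust if yours differs):

 nc -zv <host> 1521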

So I checked my listener configuration (listener.ora). I've had this problem before: a listener bound only to localhost (127.0.0.1). That was not the problem this time, though; the listener was correctly configured.

Then I turned my attention to AWS security groups. Could they be blocking my incoming connections? I checked everything and it looked fine, but I still couldn't connect remotely. I changed a few permissions. No luck. Then I completely opened the machine to the world. Still no luck.

I was about to file a bug report against AWS security groups when I decided to look for local firewalls. Bingo! iptables was the culprit. The problem was solved simply by flushing all iptables rules:
 sudo iptables -F
To make it persistent:
 sudo service iptables save
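
In retrospect, listing the active rules first would have confirmed iptables was the one dropping the packets (standard invocation, with packet counters):

 sudo iptables -L -n -v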

Lesson learned: even though it seems completely illogical to me to have iptables blocking connections on an AWS machine (which is already behind security group rules), some people do it, and there are even some AMIs that come bundled that way. So always check iptables (and/or other local firewalls) if your security group rules seem correct but you still can't connect to your instance!

Wednesday, July 5, 2017

Unlock Oracle DB account


To unlock a locked Oracle DB account:

1) Find the locked account:

  select username,account_status from dba_users where account_status like '%LOCK%';

2) Unlock the user:

  alter user <USER> account unlock;
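
If the password has expired as well, it can be reset in the same statement (standard ALTER USER syntax; pick your own password, of course):

  alter user <USER> identified by <new_password> account unlock;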

In my case the server was a development database, and locked accounts were just an unnecessary hassle since I didn't need any security on it. So, to avoid the account being locked again in the future, I changed some parameters of the DEFAULT profile (the profile used by this user in my database):

1) Check the profile used by the user:

  select profile from dba_users where username = '<USER>';


2) Change the parameters related to password expiration and failed login attempts:

  alter profile <PROFILE> limit FAILED_LOGIN_ATTEMPTS unlimited;
  alter profile <PROFILE> limit PASSWORD_LIFE_TIME unlimited;
  alter profile <PROFILE> limit PASSWORD_REUSE_TIME unlimited;
  alter profile <PROFILE> limit PASSWORD_REUSE_MAX unlimited;
  alter profile <PROFILE> limit PASSWORD_LOCK_TIME unlimited;
  alter profile <PROFILE> limit PASSWORD_GRACE_TIME unlimited;
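
To confirm the new limits took effect, the profile can be queried back from the data dictionary:

  select resource_name, limit from dba_profiles where profile = '<PROFILE>' and resource_type = 'PASSWORD';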


Obs1: You must have the ALTER PROFILE system privilege to change profile resource limits. To modify password limits and protection, you must have the ALTER PROFILE and ALTER USER system privileges.

Obs2: Don't do this on your production server or on any server where data loss is unacceptable! In such cases, follow Oracle security practices.

Tuesday, July 4, 2017

Good news, everyone! (aka articles/posts worth reading according to me)

Sometimes it seems impossible to find useful, creative or entertaining information in the huge pile of trash the Internet has become. But there are plenty of good articles and posts out there. I've decided to post here some of the ones that caught my attention lately, and to call this column "Good news, everyone!" in honor of Professor Farnsworth :) (Futurama, in case you didn't get the reference).

Deep Learning in the Stock Market

A Mathematician's Secret: We're Not All Geniuses

How to read and understand a scientific paper: a guide for non-scientists

How long should peer review take?

The Limits of the CAP Theorem

GitHub Secrets

What papers should everyone read?

Learn to Read Code

Tuesday, June 20, 2017

Oracle character sets and VARCHAR size

A few weeks ago one of my workmates contacted me with a problem that seemed particularly weird to him. A database column had a limit of 200 characters, but updates would sometimes fail with "value too large for column" even though the application was also limiting the input to 200 characters.

The problem is actually quite simple, but it can be confusing if you are not familiar with two concepts: database character sets and the way Oracle limits the size of a VARCHAR column.

First, character sets. Oracle has two types of character sets, based on the encoding used:
  • Single-byte character sets (e.g. US7ASCII and WE8ISO8859P1)
  • Multibyte character sets (e.g. AL32UTF8)
A single-byte character set, as the name implies, uses only one byte to represent each of the supported characters. A multibyte character set, on the other hand, can use more than one byte to represent one character.

Now the problem is clear. The application was limiting the input to 200 characters, but those characters used up more than 200 bytes in the database, causing the error. But why was Oracle limiting the column to 200 bytes instead of 200 characters, as expected?

As a matter of fact, Oracle can limit the size of your VARCHAR2 columns both ways. If you want to limit the size in bytes:

    mycolumn VARCHAR2(10 BYTE)

If you want to limit the size in characters:

    mycolumn VARCHAR2(10 CHAR)

The catch here is that the default behavior is to limit in bytes, contrary to what one might expect. So, if you don't specify which way to limit your VARCHAR2 columns, Oracle will limit them in bytes. Beware of that!

    mycolumn VARCHAR2(10) is equivalent to mycolumn VARCHAR2(10 BYTE)

Obs: this default behavior can be changed by setting the NLS_LENGTH_SEMANTICS parameter.
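
Here is a minimal sketch that reproduces the behavior on a multibyte (e.g. AL32UTF8) database; the table names are made up for illustration:

    create table t_bytes (mycolumn VARCHAR2(5 BYTE));
    insert into t_bytes values ('ááááá'); -- 5 characters but 10 bytes in AL32UTF8: fails with ORA-12899

    create table t_chars (mycolumn VARCHAR2(5 CHAR));
    insert into t_chars values ('ááááá'); -- fits: the limit is counted in characters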

I find this behavior a bit weird and completely understand why my workmate was confused.

If you want more information on this, here's a link to a good Oracle document:

Wednesday, January 11, 2017

impdp remap to a different schema on target machine

If you have a full schema export created with expdp and want to import it into a different schema on the target machine, you have to use the remap_schema option.

Example:

impdp <user>/<passwd> DUMPFILE=my_dmp_file.dmp remap_schema=<schema_exported>:<schema_where_I_want_to_pump_the_data_into>

It's not necessary to create the user <schema_where_I_want_to_pump_the_data_into> beforehand; the impdp utility will create it for you.
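
If the dump file does not sit in the default Data Pump directory, you will likely also need the DIRECTORY parameter pointing to an Oracle directory object (a sketch; the directory object here is the one created in the next section):

impdp <user>/<passwd> DIRECTORY=my_data_pump_directory DUMPFILE=my_dmp_file.dmp remap_schema=<schema_exported>:<schema_where_I_want_to_pump_the_data_into>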

ORA-39087: directory name MY_DATA_PUMP_DIRECTORY is invalid

When performing a database export using the expdp utility, you can run into an error like this:

expdp <user>/<passwd> schemas=<schema_to_be_exported> dumpfile=my_dmp_file.dmp logfile=my_log_file.log

ORA-39002: invalid operation 
ORA-39070: Unable to open the log file. 
ORA-39087: directory name MY_DATA_PUMP_DIRECTORY is invalid

That happens because the directory object Data Pump tries to write to is not defined for the current user.

To solve that, grant the "create any directory" privilege to your user:


SQL> grant create any directory to <user>;

Grant succeeded.


And then create a directory to be used by the expdp utility:


SQL> create directory my_data_pump_directory as '/tmp/db_dmp';

Directory created.
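
If the user running expdp is not the one who created the directory object, it also needs read/write access on it:

SQL> grant read, write on directory my_data_pump_directory to <user>;

Grant succeeded.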