1-projects/Python-OOP-Toy-master/
Note for Windows users: WSL won't work for this module!
"Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another.[1][2]" --Wikipedia
In English, this means that in OOP, code is organized in logical and self contained parts that contain within them everything needed to create, store, and manipulate one very specific element of the program. When this element is needed, a copy of it is initialized according to the instructions within. This is called an object.
As with all things programming, the specific vocabulary varies from language to language, or even programmer to programmer. Some Python vocabulary:
Class: The top level organization structure in OOP. This contains all of the instructions and storage for the operations of this part of the program. A class should be self contained and all variables within the class should only be modified by methods within the class.
Method: A function that belongs to a specific class.
Constructor: A special method, defined with __init__(), that is used to instantiate an object of this class.
Inheritance: Perhaps the most important concept in OOP, a class may inherit from another class. This gives the child class all of the variables and methods found in the parent class, or classes, automatically.
Override: If a child class needs to function slightly differently than objects of the parent class, this can be done by giving the child class a method with the same name as one found in the parent. This method will override the one defined in the parent class. Often, this is done to add child-specific functionality to the method before calling the parent version of the method using super().foo(). This is commonly done with the __init__() method.
Self: In Python, a class refers to class-level variables and methods with the keyword self. These have scope across the entire class. Variables may also be declared normally and will have scope limited to the block of code they are declared within.
This project will demonstrate the core concepts of OOP by using a library called pygame to create a toy similar to early screensavers.
For initial setup, run:
pipenv install
pipenv shell
Then to run, use: python src/draw.py
Your instructor will demonstrate the above concepts by extending the Block class.
Fill out the stubs in ball.py to extend the functionality of the ball class.
Implement simple physics to enable balls to bounce off of one another, or off of blocks. This will be HARD. If you get it 'sort of working' in any form, consider yourself to have accomplished an impressive feat!
- If pipenv install is taking forever or erroring with TIMEOUT messages, disable your antivirus software.
- If pipenv install is puking on installing pygame:
  - Don't use pipenv for this project. No install, no shell.
  - Download the appropriate .whl file from here.
    - Python 3.6 use the cp36 version. Python 3.7 use cp37, etc. Use python --version to check your version.
    - Try the win32 version first. If that doesn't work, the AMD version.
    - E.g. pygame‑1.9.3‑cp36‑cp36m‑win32.whl
  - Install it with pip install pygame-[whatever].whl (you'll need to specify the full path, likely).
  - Once it's installed, run the game from the src/ directory with python draw.py.
- If you're getting errors about InvalidMarker:
  - Don't use pipenv for this project. No install, no shell.
  - Run pip3 install pygame.
  - Once it's installed, run the game from the src/ directory with python3 draw.py.
1-projects/React-Todo-Solution-master/
- Single Page Application
- Compilers
- Bundlers
- Elements
- Components
- JSX
- Package Managers
- CDN
- Props and State
- API Documentation
- Documentation on JSX
- Objective: At this point you have become familiar with the DOM and have built out user interfaces with HTML, CSS, and some custom components. Now we're going to dive into modern front-end JavaScript development by learning about ReactJS.
- You're going to be building a ToDo App (please hold your applause).
- We know this may seem trivial, but the best part about this assignment is that it shows off some of the strengths of React, and you can also take it as far as you want, so don't hold back on being creative.
- Tool requirements
- React Dev Tools - This is a MUST; you need to install this asap!
- We have everything you need for your React developer environment in this file. We went over this in the lecture video.
- node and npm
- npm install will pull in all the node_modules you need once you cd into the root directory of the project.
- npm start will start a development server on your localhost at port 3000.
- npm test will run the tests that are included in the project. Try to get as many of these passing as you can in the allotted time.
Your job is to write the components to complete the Todo List application and get as many of the tests to pass as you can. The tests expect that you have a TodoList component that renders a Todo component for each todo item. The requirements for your Todo List app are that it should have an input field that a user can type text into and submit in order to create a new todo item. Aside from being able to add todos, you should be able to mark any todo in the list as 'complete'. In other words, a user should be able to click on any of the todos in the list and have a strikethrough go through the individual todo. This behavior should be toggle-able, i.e. a todo item that has a strikethrough through it should still be clickable in order to allow completed items to no longer be marked as 'completed'. Once you've finished your components, you'll need to have the root App component render your TodoList component.
- All components you implement should go in the src/components directory.
- The components should be named App.js, TodoList.js and Todo.js (as those are the files being imported into the tests).
- Think of your application as an Application Tree. App is the parent, which controls properties/data needed for the child components. This is how modern applications are built. They're modular, separate pieces of code called components that you 'compose' together to make your app. It's awesome!
- Be sure to keep your todos in an array on state. Arrays are so awesome to work with.
- When you need to iterate over a list and return React components out as elements, you'll need to include a "key" property on the element itself: <ElementBeingRendered key={someValue} />. Note: this is what React needs under the hood; it needs to know how to access each element, and keys need to be unique so the React engine can do its thing. An example snippet that showcases this may look something like this:
this.state.todos.map((todo, i) => <AnotherComponent key={i} todo={todo} />);
Here, we're simply passing the index of each todo item as the key for the individual React component.
- Feel free to structure your "todo" data however you'd like. i.e. strings, objects etc.
- React will give you warnings in the console that urge you to squash React Anti-Patterns. But if something is completely off, you'll get stack trace errors that will force your bundle to freeze up. You can look for these in the Chrome console.
- Refactor each todo to be an object instead of just a string. For example, todo: { 'text': 'Shop for food', 'completed': false }, and when a user clicks on a todo, switch that completed flag to true. If completed === true, this should toggle the strikethrough on the 'completed' todo. The toggling functionality should work the same as when each todo was just a string.
- Add the ability to delete a todo. The way this would work is each todo item should have an 'x' that should be clickable and that, when clicked, should remove the todo item from the state array, which will also remove it from the rendered list of todos.
- Take your App's styles to the next level. Start implementing as much creativity here as you'd like. You can build out your styles on a component-by-component basis, e.g. App.js has a file next to it in the directory tree called App.scss, and you define all your styles in that file. Be sure to @import these styles into the index.scss file.
- Persist your data in window.localStorage. Hint: you may have to pass your data to a stringifier to get it to live inside the localStorage of the browser. This will cause it to persist past the page refresh. A minimal sketch of the toggle and persistence logic follows after this list.
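Here is a minimal sketch of how the toggle and localStorage stretch goals could fit together. It assumes a class-based App component and the todo-object shape suggested above; names like toggleCompleted and persist are made up for illustration, and this does not replace the separate TodoList/Todo components the tests expect.

```js
// Sketch: todo objects, toggling a completed flag, and localStorage persistence.
import React from 'react';

class App extends React.Component {
  state = {
    // Rehydrate from localStorage if previous data exists
    todos: JSON.parse(window.localStorage.getItem('todos')) || []
  };

  // Save the current list every time state changes
  persist = () =>
    window.localStorage.setItem('todos', JSON.stringify(this.state.todos));

  addTodo = text =>
    this.setState(
      { todos: [...this.state.todos, { text, completed: false }] },
      this.persist
    );

  toggleCompleted = index =>
    this.setState(
      {
        todos: this.state.todos.map((todo, i) =>
          i === index ? { ...todo, completed: !todo.completed } : todo
        )
      },
      this.persist
    );

  render() {
    return (
      <ul>
        {this.state.todos.map((todo, i) => (
          <li
            key={i}
            onClick={() => this.toggleCompleted(i)}
            style={{ textDecoration: todo.completed ? 'line-through' : 'none' }}
          >
            {todo.text}
          </li>
        ))}
      </ul>
    );
  }
}

export default App;
```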
1-projects/Relational-Databases-master/
- Relational Databases and PostgreSQL
- Contents
- What is a relational database?
- Relational vs NoSQL
- PostgreSQL
- SQL, Structured Query Language
- The WHERE Clause
- Column Data Types
- ACID and CRUD
- NULL and NOT NULL
- COUNT
- ORDER BY
- GROUP BY
- Keys: Primary, Foreign, and Composite
- Auto-increment Columns
- Joins
- Indexes
- Transactions
- The EXPLAIN Command
- Quick and Dirty DB Design
- Normalization and Normal Forms
- Node-Postgres
- Security
- Other Relational Databases
- Assignment: Install PostgreSQL
- Assignment: Create a Table and Use It
- Assignment: NodeJS Program to Create and Populate a Table
- Assignment: Command-line Earthquake Query Tool
- Assignment: RESTful Earthquake Data Server
Data stored as row records in tables. Imagine a spreadsheet with column headers describing the contents of each column, and each row is a record.
A database can contain many tables. A table can contain many rows. A row can contain many columns.
Records are related to those in different tables through common columns that are present in both tables.
For example, an Employee
table might have the following columns in
each record:
Employee
EmployeeID FirstName LastName DepartmentID
And a Department
table might have the following columns in each
record:
Department
DepartmentID DepartmentName
Notice that both Employee
and Department
have a DepartmentID
column. This common column relates the two tables and can be used to
join them together with a query.
The structure described by the table definitions is known as the schema.
Compare to NoSQL databases that work with key/value pairs or are document stores.
NoSQL is a term that refers to non-relational databases, most usually document store databases. (Though it can apply to almost any kind of non-relational database.)
MongoDB is a great example of a NoSQL database.
Unfortunately, there are no definitive rules on when to choose one or the other.
Do you need ACID-compliance? Consider a relational database.
Does your schema (structure of data) change frequently? Consider NoSQL.
Does absolute consistency in your data matter, e.g. a bank, inventory management system, employee management, academic records, etc.? Consider a relational database.
Do you need easy-to-deploy high-availability? Consider NoSQL.
Do you need transactions to happen atomically? (The ability to update multiple records simultaneously?) Consider a relational database.
Do you need read-only access to piles of data? Consider NoSQL.
PostgreSQL is a venerable relational database that is freely available and world-class.
- Assignment: Install PostgreSQL
SQL ("sequel") is the language that people use for interfacing with relational databases.
A database is made up of a number of tables. Let's create a table using
SQL in the shell. Be sure to end the command with a semicolon ;
.
(Note: SQL commands are often capitalized by convention, but can be lowercase.)
$ psql
psql (10.1)
Type "help" for help.
dbname=> CREATE TABLE Employee (ID INT, LastName VARCHAR(20));
Use the \dt
command to show which tables exist:
dbname=> CREATE TABLE Employee (ID INT, LastName VARCHAR(20));
CREATE TABLE
dbname=> \dt
List of relations
Schema | Name | Type | Owner
--------+----------+-------+-------
public | employee | table | beej
(1 row)
Use the \d
command to see what columns a table has:
dbname=> \d Employee
Table "public.employee"
Column | Type | Collation | Nullable | Default
--------------+-----------------------+-----------+----------+---------
id | integer | | |
lastname | character varying(20) | | |
Add rows to the table with the INSERT statement:
dbname=> INSERT INTO Employee (ID, LastName) VALUES (10, 'Tanngnjostr');
INSERT 0 1
You can omit the column names if you're putting data in every column:
dbname=> INSERT INTO Employee VALUES (10, 'Tanngnjostr');
INSERT 0 1
Run some more inserts into the table:
INSERT INTO Employee VALUES (11, 'Alice');
INSERT INTO Employee VALUES (12, 'Bob');
INSERT INTO Employee VALUES (13, 'Charlie');
INSERT INTO Employee VALUES (14, 'Dave');
INSERT INTO Employee VALUES (15, 'Eve');
You can query the table with SELECT
.
Query all the rows and columns:
dbname=> SELECT * FROM Employee;
id | lastname
----+-------------
10 | Tanngnjostr
11 | Alice
12 | Bob
13 | Charlie
14 | Dave
15 | Eve
(6 rows)
With SELECT
, *
means "all columns".
You can choose specific columns:
dbname=> SELECT LastName FROM Employee;
lastname
-------------
Tanngnjostr
Alice
Bob
Charlie
Dave
Eve
(6 rows)
And you can search for specific rows with the WHERE
clause:
dbname=> SELECT * FROM Employee WHERE ID=12;
id | lastname
----+----------
12 | Bob
(1 row)
dbname=> SELECT * FROM Employee WHERE ID=14 OR LastName='Bob';
id | lastname
----+----------
12 | Bob
14 | Dave
(2 rows)
Finally, you can rename the output columns, if you wish:
SELECT ID AS "Employee ID", LastName AS Name
FROM Employee
WHERE ID=14 OR LastName='Bob';
Employee ID | Name
-------------+----------
12 | Bob
14 | Dave
The UPDATE
command can update one or many rows. Restrict which rows
are updated with a WHERE
clause.
dbname=> UPDATE Employee SET LastName='Harvey' WHERE ID=10;
UPDATE 1
dbname=> SELECT * FROM Employee WHERE ID=10;
id | lastname
----+----------
10 | Harvey
(1 row)
You can update multiple columns at once:
dbname=> UPDATE Employee SET LastName='Octothorpe', ID=99 WHERE ID=14;
UPDATE 1
Delete from a table with the DELETE
command. Use a WHERE
clause to
restrict the delete.
CAUTION! If you don't use a WHERE
clause, all rows will be deleted
from the table!
Delete some rows:
dbname=> DELETE FROM Employee WHERE ID >= 15;
DELETE 2
Delete ALL rows (Danger, Will Robinson!):
dbname=> DELETE FROM Employee;
DELETE 4
If you want to get rid of an entire table, use DROP
.
WARNING! There is no going back. Table will be completely blown away. Destroyed ...by the Empire.
dbname=> DROP TABLE Employee;
DROP TABLE
- Assignment: Create a Table and Use It
You've already seen some examples of how WHERE
affects SELECT
,
UPDATE
, and DELETE
.
Normal operators like <
, >
, =
, <=
, >=
are available.
For example:
SELECT * from animals
WHERE age >= 10;
You can add more boolean logic with AND
, OR
, and affect precedence
with parentheses:
SELECT * from animals
WHERE age >= 10 AND type = 'goat';
SELECT * from animals
WHERE age >= 10 AND (type = 'goat' OR type = 'antelope');
The LIKE
operator can be used to do pattern matching.
_ -- Match any single character
% -- Match any sequence of characters
To select all animals that start with ab
:
SELECT * from animal
WHERE name LIKE 'ab%';
You probably noticed a few data types we specified with CREATE TABLE
,
above. PostgreSQL has a lot of data
types.
This is an incomplete list of some of the more common types:
VARCHAR(n) -- Variable character string of max length n
BOOLEAN -- TRUE or FALSE
INTEGER -- Integer value
INT -- Same as INTEGER
DECIMAL(p,s) -- Decimal number with p digits of precision
-- and s digits right of the decimal point
REAL -- Floating point number
DATE -- Holds a date
TIME -- Holds a time
TIMESTAMP -- Holds an instant of time (date and time)
BLOB -- Binary object
These are two common database terms.
Short for Atomicity, Consistency, Isolation, Durability. When people mention "ACID-compliance", they're generally talking about the ability of the database to accurately record transactions in the case of crash or power failure.
Atomicity: all transactions will be "all or nothing".
Consistency: all transactions will leave the database in a consistent state with all its defined rules and constraints.
Isolation: the results of concurrent transactions are the same as if those transactions had been executed sequentially.
Durability: Once a transaction is committed, it will remain committed, despite crashes, power outages, snow, and sleet.
Short for Create, Read, Update, Delete. Describes the four basic functions of a data store.
In a relational database, these functions are handled by INSERT
,
SELECT
, UPDATE
, and DELETE
.
Columns in records can sometimes have no data, referred to by the special keyword NULL. Sometimes it makes sense to have NULL columns, and sometimes it doesn't.
If you explicitly want to disallow NULL columns in your table, you can
create the columns with the NOT NULL
constraint:
CREATE TABLE Employee (
ID INT NOT NULL,
LastName VARCHAR(20));
You can select a count of items in question with the COUNT
operator.
For example, count the rows filtered by the WHERE
clause:
SELECT COUNT(*) FROM Animals WHERE legcount >= 4;
count
-------
5
Useful with GROUP BY
, below.
ORDER BY sorts SELECT
results for you. Use DESC
to sort in
reverse order.
SELECT * FROM Pets
ORDER BY age DESC;
name | age
-----------+-----
Rover | 9
Zaphod | 4
Mittens | 3
GROUP BY, when used with an aggregating function like COUNT, can be
used to produce groups of results.
Count all the customers in certain countries:
SELECT COUNT(CustomerID), Country
FROM Customers
GROUP BY Country;
COUNT(CustomerID) | Country
----------------------+-----------
1123 | USA
734 | Germany
etc.
Rows in a table often have one column that is called the primary key.
The value in this column applies to all the rest of the data in the
record. For example, an EmployeeID
would be a great primary key,
assuming the rest of the record held employee information.
Employee
ID (Primary Key) LastName FirstName DepartmentID
To create a table and specify the primary key, use the NOT NULL
and
PRIMARY KEY
constraints:
CREATE TABLE Employee (
ID INT NOT NULL PRIMARY KEY,
LastName VARCHAR(20),
FirstName VARCHAR(20),
DepartmentID INT);
You can always search quickly by primary key.
If a key refers to a primary key in another table, it is called a foreign key (abbreviated "FK"). You are not allowed to make changes to the database that would cause the foreign key to refer to a non-existent record.
The database uses this to maintain referential integrity.
Create a foreign key using the REFERENCES
constraint. It specifies the
remote table and column the key refers to.
CREATE TABLE Department (
ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(20));
CREATE TABLE Employee (
ID INT NOT NULL PRIMARY KEY,
LastName VARCHAR(20),
FirstName VARCHAR(20),
DepartmentID INT REFERENCES Department(ID));
In the above example, you cannot add a row to Employee
until that
DepartmentID
already exists in Department
's ID
.
Also, you cannot delete a row from Department
if that row's ID
was a
DepartmentID
in Employee
.
Keys can also consist of more than one column. Composite keys can be created as follows:
CREATE TABLE example (
a INT,
b INT,
c INT,
PRIMARY KEY (a, c));
These are columns that the database manages, usually in an ever-increasing sequence. It's perfect for generating unique, numeric IDs for primary keys.
In some databases (e.g. MySQL) this is done with an AUTO_INCREMENT keyword. PostgreSQL is different.
In PostgreSQL, use the SERIAL
keyword to auto-generate sequential
numeric IDs for records.
CREATE TABLE Company (
ID SERIAL PRIMARY KEY,
Name VARCHAR(20));
When you insert, do not specify the ID column. You must, however, give a column name list that includes the remaining column names you are inserting data for. The ID column will be automatically generated by the database.
INSERT INTO Company (Name) VALUES ('My Awesome Company');
This concept is extremely important to understanding how to use relational databases!
When you have two (or more) tables with data you wish to retrieve from both, you do so by using a join. These come in a number of varieties, some of which are covered here.
When you're using SELECT to make the join between two tables, you can specify which table a specific column comes from by using the . operator.
This is especially useful when columns have the same name in the
different tables:
SELECT Animal.name, Farm.name
FROM Animal, Farm
WHERE Animal.FarmID = Farm.ID;
Tables to use in these examples:
CREATE TABLE Department (
ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(20));
CREATE TABLE Employee (
ID INT NOT NULL PRIMARY KEY,
Name VARCHAR(20),
DepartmentID INT);
INSERT INTO Department VALUES (10, 'Marketing');
INSERT INTO Department VALUES (11, 'Sales');
INSERT INTO Department VALUES (12, 'Entertainment');
INSERT INTO Employee VALUES (1, 'Alice', 10);
INSERT INTO Employee VALUES (2, 'Bob', 12);
INSERT INTO Employee VALUES (3, 'Charlie', 99);
NOTE: Importantly, department ID 11 is not referred to from
Employee
, and department ID 99 (Charlie) does not exist in
Department
. This is instrumental in the following examples.
This is the most commonly-used join, by far, and is what people mean when they just say "join" with no further qualifiers.
This will return only the rows that match the requirements from both tables.
For example, we don't see "Sales" or "Charlie" in the join because neither of them match up to the other table:
dbname=> SELECT Employee.ID, Employee.Name, Department.Name
FROM Employee, Department
WHERE Employee.DepartmentID = Department.ID;
id | name | name
----+-------+---------------
1 | Alice | Marketing
2 | Bob | Entertainment
(2 rows)
Above, we used a WHERE
clause to perform the inner join. This is
absolutely the most common way to do it.
There is an alternative syntax, below, that is barely ever used.
dbname=> SELECT Employee.ID, Employee.Name, Department.Name
FROM Employee INNER JOIN Department
ON Employee.DepartmentID = Department.ID;
id | name | name
----+-------+---------------
1 | Alice | Marketing
2 | Bob | Entertainment
(2 rows)
This join works like an inner join, but also returns all the rows from
the "left" table (the one after the FROM
clause). It puts NULL
in for the missing values in the "right" table (the one after the
LEFT JOIN
clause.)
Example:
dbname=> SELECT Employee.ID, Employee.Name, Department.Name
FROM Employee LEFT JOIN Department
ON Employee.DepartmentID = Department.ID;
id | name | name
----+---------+---------------
1 | Alice | Marketing
2 | Bob | Entertainment
3 | Charlie |
(3 rows)
Notice that even though Charlie's department isn't found in Department
, his record is still listed with a NULL
department name.
This join works like an inner join, but also returns all the rows from the "right" table (the one after the RIGHT JOIN clause). It puts NULL in for the missing values in the "left" table (the one after the FROM clause).
dbname=> SELECT Employee.ID, Employee.Name, Department.Name
FROM Employee RIGHT JOIN Department
ON Employee.DepartmentID = Department.ID;
id | name | name
----+-------+---------------
1 | Alice | Marketing
2 | Bob | Entertainment
| | Sales
(3 rows)
Notice that even though there are no employees in the Sales department,
the Sales name is listed with a NULL
employee name.
This is a blend of a Left and Right Outer Join. All information from
both tables is selected, with NULL
filling the gaps where necessary.
dbname=> SELECT Employee.ID, Employee.Name, Department.Name
FROM Employee
FULL JOIN Department
ON Employee.DepartmentID = Department.ID;
id | name | name
----+---------+---------------
1 | Alice | Marketing
2 | Bob | Entertainment
3 | Charlie |
| | Sales
(4 rows)
When searching through tables, you use a WHERE
clause to narrow things
down. For speed, the columns mentioned in the WHERE
clause should
either be a primary key, or a column for which an index has been
built.
Indexes help speed searches. In a large table, searching over an unindexed column will be slow.
Example of creating an index on the Employee table from the Keys section:
dbname=> CREATE INDEX ON Employee (LastName);
CREATE INDEX
dbname=> \d Employee
Table "public.employee"
Column | Type | Collation | Nullable | Default
--------------+-----------------------+-----------+----------+---------
id | integer | | not null |
lastname | character varying(20) | | |
firstname | character varying(20) | | |
departmentid | integer | | |
Indexes:
"employee_pkey" PRIMARY KEY, btree (id)
"employee_lastname_idx" btree (lastname)
Foreign-key constraints:
"employee_departmentid_fkey" FOREIGN KEY (departmentid) REFERENCES department(id)
In PostgreSQL, you can bundle a series of statements into a transaction. The transaction is executed atomically, which means either the entire transaction occurs, or none of the transaction occurs. There will never be a case where a transaction partially occurs.
Create a transaction by starting with a BEGIN
statement, followed by
all the statements that are to be within the transaction.
START TRANSACTION
is generally synonymous withBEGIN
in SQL.
To execute the transaction ("Let's do it!"), end with a COMMIT
statement.
To abort the transaction and do nothing ("On second thought,
nevermind!") end with a ROLLBACK
statement. This makes it like
nothing within the transaction ever happened.
Usually transactions happen within a program that checks for sanity and either commits or rolls back.
Pseudocode making DB calls that check if a rollback is necessary:
db("BEGIN"); // Begin transaction
db(`UPDATE accounts SET balance = balance - 100.00
WHERE name = 'Alice'`);
let balance = db("SELECT balance FROM accounts WHERE name = 'Alice'");
// Don't let the balance go below zero:
if (balance < 0) {
db("ROLLBACK"); // Never mind!! Roll it all back.
} else {
db("COMMIT"); // Plenty of cash
}
In the above example, the UPDATE
and SELECT
must happen at the same
time (atomically) or else another process could sneak in between and
withdraw too much money. Because it needs to be atomic, it's wrapped in
a transaction.
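With node-postgres (covered below), the same pattern might look like this sketch: a dedicated client is checked out from the pool so every statement runs on the same connection. The accounts table and amounts are the hypothetical ones from the pseudocode above.

```js
// Sketch of the same transfer logic as a real transaction with node-postgres.
const { Pool } = require('pg');
const pool = new Pool();

async function withdraw() {
  const client = await pool.connect(); // one connection for the whole transaction
  try {
    await client.query('BEGIN');
    await client.query(
      "UPDATE accounts SET balance = balance - 100.00 WHERE name = 'Alice'"
    );
    const { rows } = await client.query(
      "SELECT balance FROM accounts WHERE name = 'Alice'"
    );
    if (rows[0].balance < 0) {
      await client.query('ROLLBACK'); // never mind, roll it all back
    } else {
      await client.query('COMMIT'); // plenty of cash
    }
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // return the connection to the pool
  }
}
```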
If you just enter a single SQL statement that is not inside a BEGIN
transaction block, it gets automatically wrapped in a BEGIN
/COMMIT
block. It is a mini transaction that is COMMIT
ted immediately.
Not all SQL databases support transactions, but most do.
The EXPLAIN
command will tell you how much time the database is
spending doing a query, and what it's doing in that time.
It's a powerful command that can help tell you where you need to add indexes, change structure, or rewrite queries.
dbname=> EXPLAIN SELECT * FROM foo;
QUERY PLAN
---------------------------------------------------------
Seq Scan on foo (cost=0.00..155.00 rows=10000 width=4)
(1 row)
For more information, see the PostgreSQL EXPLAIN documentation
Designing a non-trivial database is a difficult, learned skill best left to professionals. Feel free to do small databases with minimal training, but if you get in a professional situation with a large database that needs to be designed, you should consult with people with strong domain knowledge.
That said, here are a couple pointers.
- In general, all your tables should have a unique PRIMARY KEY for each row. It's common to use SERIAL or AUTO_INCREMENT to make this happen.
- Keep an eye out for commonly duplicated data. If you are duplicating text data across several records, consider that maybe it should be in its own table and referred to with a foreign key.
- Watch out for unrelated data in the same record. If it's a record in the Employee table but it has Department_Address as a column, that probably belongs in a Department table, referred to by a foreign key.
But if you really want to design a database, read on to the Normalization and Normal Forms section.
[This topic is very deep and this section cannot do it full justice.]
Normalization is the process of designing or refactoring your tables for maximum consistency and minimum redundancy.
With NoSQL databases, we're used to denormalized data that is stored with speed in mind, and not so much consistency (sometimes NoSQL databases talk about eventual consistency).
Non-normalized tables are considered an anti-pattern in relational databases.
There are many normal forms. We'll talk about First, Second, and Third normal forms.
One of the reasons for normalizing tables is to avoid anomalies.
Insert anomaly: When we cannot insert a row into the table because some of the dependent information is not yet known. For example, we cannot create a new class record in the school database, because the record requires at least one student, and none have enrolled yet.
Update anomaly: When information is duplicated in the database and some rows are updated but not others. For example, say a record contains a city and a zipcode, but then the post office changes the zipcode. If some of the records are updated but not others, some cities will have the old zipcodes.
Delete anomaly: The opposite of an insert anomaly. When we delete some information and other related information must also be deleted against our will. For example, deleting the last student from a course causes the other course information to be also deleted.
By normalizing your tables, you can avoid these anomalies.
When a database is in first normal form, there is a primary key for each row, and there are no repeating sets of columns that should be in their own table.
Unnormalized (column titles on separate lines for clarity):
Farm
ID
AnimalName1 AnimalBreed1 AnimalProducesEggs1
AnimalName2 AnimalBreed2 AnimalProducesEggs2
1NF:
Farm
ID
Animal
ID FarmID[FK Farm(ID)] Name Breed ProducesEggs
Use a join to select all the animals in the farm:
SELECT Name, Farm.ID FROM Animal, Farm WHERE Farm.ID = Animal.FarmID;
To be in 2NF, a table must already be in 1NF.
Additionally, all non-key data must fully relate to the key data in the table.
In the farm example, above, Animal has a Name and a key FarmID, but these two pieces of information are not related.
We can fix this by adding a table to link the other two tables together:
2NF:
Farm
ID
FarmAnimal
FarmID[FK Farm(ID)] AnimalID[FK Animal(ID)]
Animal
ID Name Breed ProducesEggs
Use a join to select all the animals in the farms:
SELECT Name, Farm.ID
FROM Animal, FarmAnimal, Farm
WHERE Farm.ID = FarmAnimal.FarmID AND
Animal.ID = FarmAnimal.AnimalID;
A table in 3NF must already be in 2NF.
Additionally, columns that relate to each other AND to the key need to be moved into their own tables. This is known as removing transitive dependencies.
In the Farm example, the columns Breed
and ProducesEggs
are related.
If you know the breed, you automatically know if it produces eggs or
not.
3NF:
Farm
ID
FarmAnimal
FarmID[FK Farm(ID)] AnimalID[FK Animal(ID)]
BreedEggs
Breed ProducesEggs
Animal
ID Name Breed[FK BreedEggs(Breed)]
Use a join to select the names of all the animals in the farm that produce eggs:
SELECT Name, Farm.ID
FROM Animal, FarmAnimal, BreedEggs, Farm
WHERE Farm.ID = FarmAnimal.FarmID AND
Animal.ID = FarmAnimal.AnimalID AND
Animal.Breed = BreedEggs.Breed AND
BreedEggs.ProducesEggs = TRUE;
This is a library that allows you to interface with PostgreSQL through NodeJS.
Its documentation is exceptionally good.
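As a feel for what it looks like, here is a minimal sketch of running a query from Node with the pg package. The Employee table is just the example from earlier in these notes, and with no config the Pool reads connection details from the usual PG* environment variables.

```js
// Minimal node-postgres sketch: connect, run a query, print the rows.
const { Pool } = require('pg');

// With no arguments, Pool uses PGHOST/PGUSER/PGDATABASE/etc. from the environment.
const pool = new Pool();

async function main() {
  const result = await pool.query('SELECT ID, LastName FROM Employee');
  for (const row of result.rows) {
    console.log(row.id, row.lastname); // PostgreSQL folds unquoted names to lowercase
  }
  await pool.end(); // close the connection pool when done
}

main().catch(err => console.error(err));
```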
You might have noticed that you don't need a password to access your database that you created. This is because PostgreSQL by default uses something called peer authentication mode.
In a nutshell, it makes sure that you are logged in as yourself before you access your database. If a different user tries to access your database, they will be denied.
If you need to set up password access, see client authentication in the PostgreSQL manual
When writing code that accesses databases, there are a few rules you should follow to keep things safe.
- Don't store database passwords or other sensitive information in your code repository. Store dummy credentials instead.
- When building SQL queries in code, use parameterized queries. You build your query with parameter placeholders for where the query arguments will go.
This is your number-one line of defense against SQL injection attacks.
It's a seriously noob move to not use parameterized queries.
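For example, with node-postgres a parameterized query uses numbered placeholders and a separate values array, so user input is never spliced into the SQL string. This is only a sketch; the lastName variable and findEmployeeByLastName function are hypothetical.

```js
// Parameterized query sketch: the $1 placeholder is filled in by the driver,
// so the value is never concatenated into the SQL text.
const { Pool } = require('pg');
const pool = new Pool();

async function findEmployeeByLastName(lastName) {
  const result = await pool.query(
    'SELECT ID, LastName FROM Employee WHERE LastName = $1',
    [lastName] // values array; never build this string by hand
  );
  return result.rows;
}
```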
There are tons of them by Microsoft, Oracle, etc. etc.
Other popular open source databases in widespread use are:
IMPORTANT! These instructions assume you haven't already installed PostgreSQL. If you have already installed it, skip this section or Google for how to upgrade your installation.
- Open a terminal
- Install PostgreSQL:
  brew install postgresql
  If you get install errors at this point relating to the link phase failing or missing permissions, look back in the output and see what file it failed to write.
  For example, if it's failing to write something in /usr/local/share/man-something, try setting the ownership on those directories to yourself. Example (from the command line):
  $ sudo chown -R $(whoami) /usr/local/share/man
  Then try to install again.
- Start the database process
  - If you want to start it every time you log in, run:
    brew services start postgresql
  - If you want to just start it one time right now, run:
    pg_ctl -D /usr/local/var/postgres start
- Create a database named the same as your username:
  createdb $(whoami)
  - Optionally you can call it anything you want, but the shell defaults to looking for a database named the same as your user.
This database will contain tables.
Then start a shell by running psql
and see if it works. You should see
this prompt:
$ psql
psql (10.1)
Type "help" for help.
dbname=>
(Use psql databasename
if you created the database under something
other than your username.)
Use \l
to get a list of databases.
You can enter \q
to exit the shell.
Reports are that one of the easiest installs is with chocolatey. Might want to try that first.
You can also download a Windows installer from the official site.
Another option is to use the Windows Subsystem for Linux and follow the Ubuntu instructions for installing PostgreSQL.
Arch requires a bit more hands-on, but not much more. Check this out if you want to see a different Unix-y install procedure (or if you run Arch).
Launch the shell on your database, and create a table.
CREATE TABLE Employee (ID INT, FirstName VARCHAR(20), LastName VARCHAR(20));
Insert some records:
INSERT INTO Employee VALUES (1, 'Alpha', 'Alphason');
INSERT INTO Employee VALUES (2, 'Bravo', 'Bravoson');
INSERT INTO Employee VALUES (3, 'Charlie', 'Charleson');
INSERT INTO Employee VALUES (4, 'Delta', 'Deltason');
INSERT INTO Employee VALUES (5, 'Echo', 'Ecoson');
Select all records:
SELECT * FROM Employee;
Select Employee #3's record:
SELECT * FROM Employee WHERE ID=3;
Delete Employee #3's record:
DELETE FROM Employee WHERE ID=3;
Use SELECT
to verify the record is deleted.
Update Employee #2's name to be "Foxtrot Foxtrotson":
UPDATE Employee SET FirstName='Foxtrot', LastName='Foxtrotson' WHERE ID=2;
Use SELECT
to verify the update.
Using Node-Postgres, write a program that creates a table.
Run the following query from your JS code:
CREATE TABLE IF NOT EXISTS Earthquake
(Name VARCHAR(20), Magnitude REAL)
Populate the table with the following data:
let data = [
["Earthquake 1", 2.2],
["Earthquake 2", 7.0],
["Earthquake 3", 1.8],
["Earthquake 4", 5.2],
["Earthquake 5", 2.9],
["Earthquake 6", 0.6],
["Earthquake 7", 6.6]
];
You'll have to run an INSERT
statement for each one.
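One way this might look with node-postgres is the following sketch, assuming the data array shown above is in scope. The makeTable function name is just for illustration.

```js
// Sketch: create the Earthquake table and insert each [name, magnitude] pair.
const { Pool } = require('pg');
const pool = new Pool();

async function makeTable() {
  await pool.query(
    `CREATE TABLE IF NOT EXISTS Earthquake
     (Name VARCHAR(20), Magnitude REAL)`
  );

  for (const [name, magnitude] of data) {
    // One parameterized INSERT per element of the data array
    await pool.query(
      'INSERT INTO Earthquake (Name, Magnitude) VALUES ($1, $2)',
      [name, magnitude]
    );
  }

  await pool.end();
}

makeTable().catch(err => console.error(err));
```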
Open a PostgreSQL shell (psql
) and verify the table exists:
user-> \dt
List of relations
Schema | Name | Type | Owner
--------+------------+-------+-------
public | earthquake | table | user
(1 row)
Also verify it is populated:
user-> SELECT * from Earthquake;
name | magnitude
--------------+-----------
Earthquake 1 | 2.2
Earthquake 2 | 7
Earthquake 3 | 1.8
Earthquake 4 | 5.2
Earthquake 5 | 2.9
Earthquake 6 | 0.6
Earthquake 7 | 6.6
(7 rows)
Hints:
Extra Credit:
- Add an ID column to help normalize the database. Make this column SERIAL to auto-increment.
- Add Date, Lat, and Lon columns to record more information about the event.
Write a tool that queries the database for earthquakes that are at least a given magnitude.
$ node earthquake 2.9
Earthquakes with magnitudes greater than or equal to 2.9:
Earthquake 2: 7
Earthquake 7: 6.6
Earthquake 4: 5.2
Earthquake 5: 2.9
Use ORDER BY Magnitude DESC
to order the results in descending order
by magnitude.
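A sketch of what the command-line tool might look like, reading the minimum magnitude from process.argv and using a parameterized comparison:

```js
// Sketch: node earthquake <minMagnitude>
const { Pool } = require('pg');
const pool = new Pool();

async function main() {
  const minMag = parseFloat(process.argv[2]);

  const result = await pool.query(
    `SELECT Name, Magnitude FROM Earthquake
     WHERE Magnitude >= $1
     ORDER BY Magnitude DESC`,
    [minMag]
  );

  console.log(`Earthquakes with magnitudes greater than or equal to ${minMag}:`);
  for (const row of result.rows) {
    console.log(`${row.name}: ${row.magnitude}`);
  }

  await pool.end();
}

main().catch(err => console.error(err));
```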
Use ExpressJS and write a webserver that implements a RESTful API to access the earthquake data.
Endpoints:
/
(GET) Output usage information in HTML.
Example results:
<html>
<body>Usage: [endpoint info]</body>
</html>
/minmag
(GET) Output JSON list of earthquakes that are larger than the
value specified in the mag
parameter. Use form encoding to pass the
data.
Example results:
{
"results": [
{
"name": "Earthquake 2",
"magnitude": 7
},
{
"name": "Earthquake 4",
"magnitude": 5.2
}
]
}
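A sketch of the /minmag endpoint with Express and node-postgres. One reading of "form encoding" for a GET request is to pass mag in the query string, which is what this sketch assumes; adjust to however you choose to send the parameter.

```js
// Sketch: GET /minmag returns earthquakes with magnitude larger than mag.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool();

app.get('/minmag', async (req, res) => {
  const minMag = parseFloat(req.query.mag); // e.g. /minmag?mag=2.9

  try {
    const result = await pool.query(
      'SELECT Name, Magnitude FROM Earthquake WHERE Magnitude > $1',
      [minMag]
    );
    res.json({
      results: result.rows.map(r => ({ name: r.name, magnitude: r.magnitude }))
    });
  } catch (err) {
    res.status(500).json({ status: 'error', message: err.message });
  }
});

app.listen(3000);
```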
Extra Credit:
/new
(POST) Add a new earthquake to the database. Use form encoding to
pass name
and mag
. Return a JSON status message:
{ "status": "ok" }
or
{ "status": "error", "message": "[error message]" }
/delete
(DELETE) Delete an earthquake from the database. Use form
encoding to pass name
. Return status similar to /new
, above.
1-projects/solutions/
- Run npm install to install the prereqs.
- Run node maketable to create the DB tables.
- Run node earthquake 2.9 to see all earthquakes larger than magnitude 2.9.
1-projects/webapi-ii-challenge-master/
- Express Routing
- Reading Request data from body and URL parameters
- Sub-routes
- API design and development.
Use Node.js and Express to build an API that performs CRUD operations on blog posts.
- Fork and Clone this repository.
- CD into the folder where you cloned the repository.
- Type npm install to download all dependencies.
- To start the server, type npm run server from the root folder (where the package.json file is). The server is configured to restart automatically as you make changes.
The data folder contains a database populated with test posts.
Database access will be done using the db.js file included inside the data folder.
The db.js publishes the following methods:
- find(): calling find returns a promise that resolves to an array of all the posts contained in the database.
- findById(): this method expects an id as its only parameter and returns the post corresponding to the id provided or an empty array if no post with that id is found.
- insert(): calling insert passing it a post object will add it to the database and return an object with the id of the inserted post. The object looks like this: { id: 123 }.
- update(): accepts two arguments, the first is the id of the post to update and the second is an object with the changes to apply. It returns the count of updated records. If the count is 1 it means the record was updated correctly.
- remove(): the remove method accepts an id as its first parameter and upon successfully deleting the post from the database it returns the number of records deleted.
- findPostComments(): the findPostComments accepts a postId as its first parameter and returns all comments on the post associated with the post id.
- findCommentById(): accepts an id and returns the comment associated with that id.
- insertComment(): calling insertComment while passing it a comment object will add it to the database and return an object with the id of the inserted comment. The object looks like this: { id: 123 }. This method will throw an error if the post_id field in the comment object does not match a valid post id in the database.
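For a feel of how these helpers combine with Express, here is a minimal, standalone sketch of a GET /api/posts handler using find(); the error response shape follows the specs further down, and this is not the project's actual server file.

```js
// Sketch: Express route using the provided db.js helpers.
const express = require('express');
const db = require('./data/db.js');

const server = express();
server.use(express.json());

server.get('/api/posts', async (req, res) => {
  try {
    const posts = await db.find(); // resolves to an array of all posts
    res.status(200).json(posts);
  } catch (err) {
    res.status(500).json({ error: 'The posts information could not be retrieved.' });
  }
});

server.listen(4000); // illustrative port; the real project uses npm run server
```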
Now that we have a way to add, update, remove and retrieve data from the provided database, it is time to work on the API.
A Blog Post in the database has the following structure:
{
title: "The post title", // String, required
contents: "The post contents", // String, required
created_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
updated_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
}
A Comment in the database has the following structure:
{
text: "The text of the comment", // String, required
post_id: "The id of the associated post", // Integer, required, must match the id of a post entry in the database
created_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
updated_at: Mon Aug 14 2017 12:50:16 GMT-0700 (PDT) // Date, defaults to current date
}
- Add the code necessary to implement the endpoints listed below.
- Separate the endpoints that begin with /api/posts into a separate Express Router.
Configure the API to handle the following routes:
Method | Endpoint | Description |
---|---|---|
POST | /api/posts | Creates a post using the information sent inside the request body . |
POST | /api/posts/:id/comments | Creates a comment for the post with the specified id using information sent inside of the request body . |
GET | /api/posts | Returns an array of all the post objects contained in the database. |
GET | /api/posts/:id | Returns the post object with the specified id. |
GET | /api/posts/:id/comments | Returns an array of all the comment objects associated with the post with the specified id. |
DELETE | /api/posts/:id | Removes the post with the specified id and returns the deleted post object. You may need to make additional calls to the database in order to satisfy this requirement. |
PUT | /api/posts/:id | Updates the post with the specified id using data from the request body . Returns the modified document, NOT the original. |
When the client makes a POST request to /api/posts:
- If the request body is missing the title or contents property:
  - cancel the request.
  - respond with HTTP status code 400 (Bad Request).
  - return the following JSON response: { errorMessage: "Please provide title and contents for the post." }.
- If the information about the post is valid:
  - save the new post to the database.
  - return HTTP status code 201 (Created).
  - return the newly created post.
- If there's an error while saving the post:
  - cancel the request.
  - respond with HTTP status code 500 (Server Error).
  - return the following JSON object: { error: "There was an error while saving the post to the database" }.
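Put together, the POST /api/posts behavior above might look like this sketch inside an Express Router (the router would be mounted at /api/posts in the main server file; paths and variable names are illustrative):

```js
// Sketch: posts router implementing the POST /api/posts rules above.
const express = require('express');
const db = require('../data/db.js');

const router = express.Router();

router.post('/', async (req, res) => {
  const { title, contents } = req.body;

  // 400 if required fields are missing
  if (!title || !contents) {
    return res
      .status(400)
      .json({ errorMessage: 'Please provide title and contents for the post.' });
  }

  try {
    const { id } = await db.insert({ title, contents }); // insert returns { id }
    const newPost = await db.findById(id);               // look up the saved post
    res.status(201).json(newPost);
  } catch (err) {
    res
      .status(500)
      .json({ error: 'There was an error while saving the post to the database' });
  }
});

module.exports = router;
```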
When the client makes a POST request to /api/posts/:id/comments:
- If the post with the specified id is not found:
  - return HTTP status code 404 (Not Found).
  - return the following JSON object: { message: "The post with the specified ID does not exist." }.
- If the request body is missing the text property:
  - cancel the request.
  - respond with HTTP status code 400 (Bad Request).
  - return the following JSON response: { errorMessage: "Please provide text for the comment." }.
- If the information about the comment is valid:
  - save the new comment to the database.
  - return HTTP status code 201 (Created).
  - return the newly created comment.
- If there's an error while saving the comment:
  - cancel the request.
  - respond with HTTP status code 500 (Server Error).
  - return the following JSON object: { error: "There was an error while saving the comment to the database" }.
When the client makes a GET request to /api/posts:
- If there's an error in retrieving the posts from the database:
  - cancel the request.
  - respond with HTTP status code 500.
  - return the following JSON object: { error: "The posts information could not be retrieved." }.
When the client makes a GET request to /api/posts/:id:
- If the post with the specified id is not found:
  - return HTTP status code 404 (Not Found).
  - return the following JSON object: { message: "The post with the specified ID does not exist." }.
- If there's an error in retrieving the post from the database:
  - cancel the request.
  - respond with HTTP status code 500.
  - return the following JSON object: { error: "The post information could not be retrieved." }.
When the client makes a GET request to /api/posts/:id/comments:
- If the post with the specified id is not found:
  - return HTTP status code 404 (Not Found).
  - return the following JSON object: { message: "The post with the specified ID does not exist." }.
- If there's an error in retrieving the comments from the database:
  - cancel the request.
  - respond with HTTP status code 500.
  - return the following JSON object: { error: "The comments information could not be retrieved." }.
When the client makes a DELETE request to /api/posts/:id:
- If the post with the specified id is not found:
  - return HTTP status code 404 (Not Found).
  - return the following JSON object: { message: "The post with the specified ID does not exist." }.
- If there's an error in removing the post from the database:
  - cancel the request.
  - respond with HTTP status code 500.
  - return the following JSON object: { error: "The post could not be removed" }.
When the client makes a PUT request to /api/posts/:id:
- If the post with the specified id is not found:
  - return HTTP status code 404 (Not Found).
  - return the following JSON object: { message: "The post with the specified ID does not exist." }.
- If the request body is missing the title or contents property:
  - cancel the request.
  - respond with HTTP status code 400 (Bad Request).
  - return the following JSON response: { errorMessage: "Please provide title and contents for the post." }.
- If there's an error when updating the post:
  - cancel the request.
  - respond with HTTP status code 500.
  - return the following JSON object: { error: "The post information could not be modified." }.
- If the post is found and the new information is valid:
  - update the post document in the database using the new information sent in the request body.
  - return HTTP status code 200 (OK).
  - return the newly updated post.
To work on the stretch problems you'll need to enable the cors middleware. Follow these steps:
- add the cors npm module: npm i cors.
- add server.use(cors()) after server.use(express.json()).
Create a new React application and connect it to your server:
- Use create-react-app to create an application inside the root folder, name it client.
- From the React application connect to the /api/posts endpoint in the API and show the list of posts.
- Style the list of posts however you see fit.
2-resources/__CHEAT-SHEETS/All/
title: 101 category: JavaScript libraries layout: 2017/sheet updated: 2017-09-21 intro: | 101 is a JavaScript library for dealing with immutable data in a functional manner.
const isObject = require('101/isObject')
isObject({}) // → true
Every function is exposed as a module.
See: 101
isObject({})
isString('str')
isRegExp(/regexp/)
isBoolean(true)
isEmpty({})
isFunction(x => x)
isInteger(10)
isNumber(10.1)
instanceOf(obj, 'string')
let obj = {}
obj = put(obj, 'user.name', 'John')
// → { user: { name: 'John' } }
pluck(obj, 'user.name')
// → 'John'
obj = del(obj, 'user')
// → { }
pluck(state, 'user.profile.name')
pick(state, ['user', 'ui'])
pick(state, /^_/)
pluck
returns values, pick
returns subsets of objects.
put(state, 'user.profile.name', 'john')
See: put
del(state, 'user.profile')
omit(state, ['user', 'data'])
omit
is like del
, but supports multiple keys to be deleted.
hasKeypaths(state, ['user'])
hasKeypaths(state, { 'user.profile.name': 'john' })
See: hasKeypaths
values(state)
and(x, y)     → x && y
or(x, y)      → x || y
xor(x, y)     → !(!x && !y) && !(x && y)
equals(x, y)  → x === y
exists(x)     → !!x
not(x)        → !x
Useful for function composition.
compose(f, g) // x => f(g(x))
curry(f) // x => y => f(x, y)
flip(f) // f(x, y) --> f(y, x)
passAll(f, g) // x => f(x) && g(x)
passAny(f, g) // x => f(x) || g(x)
converge(and, [pluck('a'), pluck('b')])(x)
// → and(pluck(x, 'a'), pluck(x, 'b'))
See: converge
find(list, x => x.y === 2)
findIndex(list, x => ...)
includes(list, 'item')
last(list)
find(list, hasProps('id'))
groupBy(list, 'id')
indexBy(list, 'id')
isFloat = passAll(isNumber, compose(isInteger, not))
// n => isNumber(n) && not(isInteger(n))
function doStuff (object, options) { ... }
doStuffForce = curry(flip(doStuff))({ force: true })
title: Absinthe category: Hidden layout: 2017/sheet tags: [WIP] updated: 2017-10-10 intro: | Absinthe allows you to write GraphQL servers in Elixir.
-
Schema
- The root. Defines what queries you can do, and what types they return. -
Resolver
- Functions that return data. -
Type
- A type definition describing the shape of the data you'll return.
defmodule Blog.Web.Router do
use Phoenix.Router
forward "/", Absinthe.Plug,
schema: Blog.Schema
end
Absinthe is a Plug, and you pass it one Schema.
See: Our first query
defmodule Blog.Schema do
use Absinthe.Schema
import_types Blog.Schema.Types
query do
@desc "Get a list of blog posts"
field :posts, list_of(:post) do
resolve &Blog.PostResolver.all/2
end
end
end
This schema will account for { posts { ··· } }
. It returns a Type of :post
, and delegates to a Resolver.
defmodule Blog.PostResolver do
def all(_args, _info) do
{:ok, Blog.Repo.all(Blog.Post)}
end
end
This is the function that the schema delegated the posts
query to.
defmodule Blog.Schema.Types do
use Absinthe.Schema.Notation
@desc "A blog post"
object :post do
field :id, :id
field :title, :string
field :body, :string
end
end
This defines a type :post
, which is used by the resolver.
{ user(id: "1") { ··· } }
query do
field :user, type: :user do
arg :id, non_null(:id)
resolve &Blog.UserResolver.find/2
end
end
def find(%{id: id} = args, _info) do
···
end
See: Query arguments
{
mutation CreatePost {
post(title: "Hello") { id }
}
}
mutation do
@desc "Create a post"
field :post, type: :post do
arg :title, non_null(:string)
resolve &Blog.PostResolver.create/2
end
end
See: Mutations
- Absinthe website (absinthe-graphql.org)
- GraphQL cheatsheet (devhints.io)
Allows you to filter listings by a certain scope.
scope :draft
scope :for_approval
scope :public, if: ->{ current_admin_user.can?(...) }
scope "Unapproved", :pending
scope("Published") { |books| books.where(:published: true) }
filter :email
filter :username
You can define custom actions for models.
before_filter only: [:show, :edit, :publish] do
@post = Post.find(params[:id])
end
member_action :publish, method: :put do
@post.publish!
redirect_to admin_posts_path, notice: "The post '#{@post}' has been published!"
end
index do
column do |post|
link_to 'Publish', publish_admin_post_path(post), method: :put
end
end
action_item only: [:edit, :show] do
@post = Post.find(params[:id])
link_to 'Publish', publish_admin_post_path(post), method: :put
end
column :foo
column :title, sortable: :name do |post|
strong post.title
end
status_tag "Done" # Gray
status_tag "Finished", :ok # Green
status_tag "You", :warn # Orange
status_tag "Failed", :error # Red
ActiveAdmin.register Post do
actions :index, :edit
# or: config.clear_action_items!
end
title: adb (Android Debug Bridge) category: CLI layout: 2017/sheet weight: -1 authors:
- github: ZackNeyland updated: 2018-03-06
Command | Description |
---|---|
adb devices | Lists connected devices |
adb devices -l | Lists connected devices and kind |
adb root | Restarts adbd with root permissions |
adb start-server | Starts the adb server |
adb kill-server | Kills the adb server |
adb remount | Remounts file system with read/write access |
adb reboot | Reboots the device |
adb reboot bootloader | Reboots the device into fastboot |
adb disable-verity | Disables dm-verity so the system partition can be remounted read/write |
wait-for-device
can be specified after adb
to ensure that the command will run once the device is connected.
-s
can be used to send the commands to a specific device when multiple are connected.
$ adb wait-for-device devices
List of devices attached
somedevice-1234 device
someotherdevice-1234 device
$ adb -s somedevice-1234 root
Command | Description |
---|---|
adb logcat | Starts printing log messages to stdout |
adb logcat -g | Displays current log buffer sizes |
adb logcat -G <size> | Sets the buffer size (K or M) |
adb logcat -c | Clears the log buffers |
adb logcat *:V | Enables ALL log messages (verbose) |
adb logcat -f <filename> | Dumps to specified file |
$ adb logcat -G 16M
$ adb logcat *:V > output.log
Command | Description |
---|---|
adb push <local> <remote> | Copies the local to the device at remote |
adb pull <remote> <local> | Copies the remote from the device to local |
$ echo "This is a test" > test.txt
$ adb push test.txt /sdcard/test.txt
$ adb pull /sdcard/test.txt pulledTest.txt
Command | Description |
---|---|
adb shell <command> | Runs the specified command on device (most unix commands work here) |
adb shell wm size | Displays the current screen resolution |
adb shell wm size WxH | Sets the resolution to WxH |
adb shell pm list packages | Lists all installed packages |
adb shell pm list packages -3 | Lists all installed 3rd-party packages |
adb shell monkey -p app.package.name | Starts the specified package |
title: Google Analytics's analytics.js category: Analytics layout: 2017/sheet updated: 2017-10-29 intro: | Google Analytics's analytics.js is deprecated.
ga('create', 'UA-XXXX-Y', 'auto')
ga('create', 'UA-XXXX-Y', { userId: 'USER_ID' })
ga('send', 'pageview')
ga('send', 'pageview', { 'dimension15': 'My custom dimension' })
ga('send', 'event', 'button', 'click', {color: 'red'});
ga('send', 'event', 'button', 'click', 'nav buttons', 4);
/* ^category ^action ^label ^value */
ga('send', 'exception', {
exDescription: 'DatabaseError',
exFatal: false,
appName: 'myapp',
appVersion: '0.1.2'
})
mixpanel.identify('284');
mixpanel.people.set({ $email: 'hi@gmail.com' });
mixpanel.register({ age: 28, gender: 'male' }); /* set common properties */
mixpanel
ga('create', 'UA-XXXX-Y', 'auto');
ga('create', 'UA-XXXX-Y', { userId: 'USER_ID' });
ga('send', 'pageview');
ga('send', 'pageview', { 'dimension15': 'My custom dimension' });
analytics.js
- Lists (ng-repeat)
- Model (ng-model)
- Defining a module
- Controller with protection from minification
- Service
- Directive
- HTTP
<html ng-app="nameApp">
<ul ng-controller="MyListCtrl">
<li ng-repeat="phone in phones">
{{phone.name}}
</li>
</ul>
<select ng-model="orderProp">
<option value="name">Alphabetical</option>
<option value="age">Newest</option>
</select>
App = angular.module('myApp', []);
App.controller('MyListCtrl', function ($scope) {
$scope.phones = [ ... ];
});
App.controller('Name', [
'$scope',
'$http',
function ($scope, $http) {
}
]);
a.c 'name', [
'$scope'
'$http'
($scope, $http) ->
]
App.service('NameService', function($http){
return {
get: function(){
return $http.get(url);
}
}
});
In the controller, you call the service with a parameter, and it will use promises to return data from the server.
App.controller('controllerName',
function(NameService){
NameService.get()
.then(function(){})
})
App.directive('name', function(){
return {
template: '<h1>Hello</h1>'
}
});
In HTML, you will use <name></name> to render your template <h1>Hello</h1>.
App.controller('PhoneListCtrl', function ($scope, $http) {
$http.get('/data.json').success(function (data) {
$scope.phones = data;
})
});
References:
mkdir -p gif
mplayer -ao null -vo gif89a:outdir=gif $INPUT
mogrify -format gif *.png
gifsicle --colors=256 --delay=4 --loopcount=0 --dither -O3 gif/*.gif > ${INPUT%.*}.gif
rm -rf gif
You'll need mplayer, imagemagick and gifsicle. This converts frames to .png, then turns them into an animated gif.
mplayer -ao null -ss 0:02:06 -endpos 0:05:00 -vo gif89a:outdir=gif videofile.mp4
See -ss and -endpos.
Table of Contents generated with DocToc
\033[#m
0 clear
1 bold
4 underline
5 blink
30-37 fg color
40-47 bg color
1K clear line (to beginning of line)
2K clear line (entire line)
2J clear screen
0;0H move cursor to 0;0
1A move up 1 line
0 black
1 red
2 green
3 yellow
4 blue
5 magenta
6 cyan
7 white
hide_cursor() { printf "\e[?25l"; }
show_cursor() { printf "\e[?25h"; }
Table of Contents generated with DocToc
- Ruby installation (github.com)
- Postgres installation (github.com)
- GitLab installation (github.com)
Table of Contents generated with DocToc
title: "Ansible quickstart" category: Ansible layout: 2017/sheet description: | A quick guide to getting started with your first Ansible playbook.
$ brew install ansible # OSX
$ [sudo] apt install ansible # elsewhere
Ansible is available as a package on most operating systems.
See: Installation
~$ mkdir setup
~$ cd setup
Make a folder for your Ansible files.
See: Getting started
[sites]
127.0.0.1
192.168.0.1
192.168.0.2
192.168.0.3
This is a list of hosts you want to manage, grouped into groups. (Hint: try using localhost ansible_connection=local to deploy to your local machine.)
See: Intro to Inventory
- hosts: 127.0.0.1
user: root
tasks:
- name: install nginx
apt: pkg=nginx state=present
- name: start nginx every bootup
service: name=nginx state=started enabled=yes
- name: do something in the shell
shell: echo hello > /tmp/abc.txt
- name: install bundler
gem: name=bundler state=latest
See: Intro to Playbooks
~/setup$ ls
hosts
playbook.yml
~/setup$ ansible-playbook -i hosts playbook.yml
PLAY [all] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [127.0.0.1]
TASK: [install nginx] *********************************************************
ok: [127.0.0.1]
TASK: [start nginx every bootup] ***********************************************
ok: [127.0.0.1]
...
- Getting started with Ansible (lowendbox.com)
- Getting started (docs.ansible.com)
- Intro to Inventory (docs.ansible.com)
- Intro to Playbooks (docs.ansible.com)
Table of Contents generated with DocToc
title: Ansible modules category: Ansible layout: 2017/sheet prism_languages: [yaml] updated: 2017-10-03
{% raw %}
---
- hosts: production
remote_user: root
tasks:
- ···
Place your modules inside tasks.
- apt: pkg=vim state=present
- apt:
pkg: vim
state: present
- apt: >
pkg=vim
state=present
Define your tasks in any of these formats. One-line format is preferred for short declarations, while maps are preferred for longer.
- apt:
pkg: nodejs
state: present # absent | latest
update_cache: yes
force: no
- apt:
deb: "https://packages.erlang-solutions.com/erlang-solutions_1.0_all.deb"
- apt_repository:
repo: "deb https://··· raring main"
state: present
- apt_key:
id: AC40B2F7
url: "http://···"
state: present
- git:
repo: git://github.com/
dest: /srv/checkout
version: master
depth: 10
bare: yes
See: git module
- git_config:
name: user.email
scope: global # local | system
value: hi@example.com
See: git_config module
- user:
state: present
name: git
system: yes
shell: /bin/sh
groups: admin
comment: "Git Version Control"
See: user module
- service:
name: nginx
state: started
enabled: yes # optional
See: service module
- shell: apt-get install nginx -y
- shell: echo hello
args:
creates: /path/file # skip if this exists
removes: /path/file # skip if this is missing
chdir: /path # cd here before running
- shell: |
echo "hello there"
echo "multiple lines"
See: shell module
- script: /x/y/script.sh
args:
creates: /path/file # skip if this exists
removes: /path/file # skip if this is missing
chdir: /path # cd here before running
See: script module
- file:
path: /etc/dir
state: directory # file | link | hard | touch | absent
# Optional:
owner: bin
group: wheel
mode: 0644
recurse: yes # mkdir -p
force: yes # ln -nfs
See: file module
- copy:
src: /app/config/nginx.conf
dest: /etc/nginx/nginx.conf
# Optional:
owner: user
group: user
mode: 0644
backup: yes
See: copy module
- template:
src: config/redis.j2
dest: /etc/redis.conf
# Optional:
owner: user
group: user
mode: 0644
backup: yes
See: template module
- name: do something locally
local_action: shell echo hello
- debug:
msg: "Hello {{ var }}"
See: debug module {% endraw %}
Table of Contents generated with DocToc
roles/
common/
tasks/
handlers/
files/ # 'copy' will refer to this
templates/ # 'template' will refer to this
meta/ # Role dependencies here
vars/
defaults/
main.yml
Table of Contents generated with DocToc
{% raw %}
$ sudo mkdir /etc/ansible
$ sudo vim /etc/ansible/hosts
[example]
192.0.2.101
192.0.2.102
$ ansible-playbook playbook.yml
- hosts: all
user: root
sudo: no
vars:
aaa: bbb
tasks:
- ...
handlers:
- ...
tasks:
- include: db.yml
handlers:
- include: db.yml user=timmy
handlers:
- name: start apache2
action: service name=apache2 state=started
tasks:
- name: install apache
action: apt pkg=apache2 state=latest
notify:
- start apache2
- hosts: lol
vars_files:
- vars.yml
vars:
project_root: /etc/xyz
tasks:
- name: Create the SSH directory.
file: state=directory path=${project_root}/home/.ssh/
only_if: "$vm == 0"
- hosts: xxx
roles:
- db
- { role:ruby, sudo_user:$user }
- web
# Uses:
# roles/db/tasks/*.yml
# roles/db/handlers/*.yml
- name: my task
command: ...
register: result
failed_when: "'FAILED' in result.stderr"
ignore_errors: yes
changed_when: "result.rc != 2"
vars:
local_home: "{{ lookup('env','HOME') }}"
{% endraw %}
Table of Contents generated with DocToc
CACHE MANIFEST
# version
CACHE:
http://www.google.com/jsapi
/assets/app.js
/assets/bg.png
NETWORK:
*
Note that Appcache is deprecated!
See: Using the application cache (developer.mozilla.org)
Table of Contents generated with DocToc
title: AppleScript updated: 2018-12-06 layout: 2017/sheet category: macOS prism_languages: [applescript]
osascript -e "..."
display notification "X" with title "Y"
-- This is a single line comment
# This is another single line comment
(*
This is
a multi
line comment
*)
-- default voice
say "Hi I am a Mac"
-- specified voice
say "Hi I am a Mac" using "Zarvox"
-- beep once
beep
-- beep 10 times
beep 10
-- delay for 5 seconds
delay 5
Table of Contents generated with DocToc
<meta property="al:ios:url" content="applinks://docs" />
<meta property="al:ios:app_store_id" content="12345" />
<meta property="al:ios:app_name" content="App Links" />
<meta property="al:android:url" content="applinks://docs" />
<meta property="al:android:app_name" content="App Links" />
<meta property="al:android:package" content="org.applinks" />
<meta property="al:web:url" content="http://applinks.org/documentation" />
ios
ipad
iphone
android
windows_phone
web
Table of Contents generated with DocToc
- Tables
- Fields
- where (restriction)
- select (projection)
- join
- limit / offset
- Aggregates
- order
- With ActiveRecord
- Clean code with arel
- Reference
users = Arel::Table.new(:users)
users = User.arel_table # ActiveRecord model
users[:name]
users[:id]
users.where(users[:name].eq('amy'))
# SELECT * FROM users WHERE users.name = 'amy'
users.project(users[:id])
# SELECT users.id FROM users
In ActiveRecord (without Arel), if :photos is the name of the association, use joins:
users.joins(:photos)
In Arel, if photos is defined as the Arel table:
photos = Photo.arel_table
users.join(photos)
users.join(photos, Arel::Nodes::OuterJoin).on(users[:id].eq(photos[:user_id]))
users.joins(:photos).merge(Photo.where(published: true))
If the simpler version doesn't help and you want to add more SQL statements to it:
users.join(
users.join(photos, Arel::Nodes::OuterJoin)
.on(photos[:user_id].eq(users[:id]).and(photos[:published].eq(true)))
)
Multiple joins with the same table but different meanings and/or conditions:
creators = User.arel_table.alias('creators')
updaters = User.arel_table.alias('updaters')
photos = Photo.arel_table
photos_with_credits = photos
.join(photos.join(creators, Arel::Nodes::OuterJoin).on(photos[:created_by_id].eq(creators[:id])))
.join(photos.join(updaters, Arel::Nodes::OuterJoin).on(photos[:assigned_id].eq(updaters[:id])))
.project(photos[:name], photos[:created_at], creators[:name].as('creator'), updaters[:name].as('editor'))
photos_with_credits.to_sql
# => "SELECT `photos`.`name`, `photos`.`created_at`, `creators`.`name` AS creator, `updaters`.`name` AS editor FROM `photos` INNER JOIN (SELECT FROM `photos` LEFT OUTER JOIN `users` `creators` ON `photos`.`created_by_id` = `creators`.`id`) INNER JOIN (SELECT FROM `photos` LEFT OUTER JOIN `users` `updaters` ON `photos`.`updated_by_id` = `updaters`.`id`)"
# after the request is done, you can use the attributes you named
# it's as if every Photo record you got has "creator" and "editor" fields, containing creator name and editor name
photos_with_credits.map{ |photo|
"#{photo.name} - copyright #{photo.created_at.year} #{photo.creator}, edited by #{photo.editor}"
}.join('; ')
users.take(5) # => SELECT * FROM users LIMIT 5
users.skip(4) # => SELECT * FROM users OFFSET 4
users.project(users[:age].sum) # .average .minimum .maximum
users.project(users[:id].count)
users.project(users[:id].count.as('user_count'))
users.order(users[:name])
users.order(users[:name], users[:age].desc)
users.reorder(users[:age])
User.arel_table
User.where(id: 1).arel
Most of the clever stuff should be in scopes, e.g. the code above could become:
photos_with_credits = Photo.with_creator.with_editor
You can store requests in variables then add SQL segments:
all_time = photos_with_credits.count
this_month = photos_with_credits.where(photos[:created_at].gteq(Date.today.beginning_of_month))
recent_photos = photos_with_credits.where(photos[:created_at].gteq(Date.today.beginning_of_month)).limit(5)
Table of Contents generated with DocToc
{: .-three-column}
Shortcut | Description |
---|---|
⌘\ | Toggle tree |
⌘⇧\ | Reveal current file |
{: .-shortcuts}
Shortcut | Description |
---|---|
⌘/ | Toggle comments |
{: .-shortcuts}
Shortcut | Description |
---|---|
⌘k ← | Split pane to the left |
--- | --- |
⌘⌥= | Grow pane |
⌘⌥- | Shrink pane |
--- | --- |
^⇧← | Move tab to left |
{: .-shortcuts}
Shortcut | Description |
---|---|
^m | Go to matching bracket |
^] | Remove brackets from selection |
^⌘m | Select inside brackets |
⌥⌘. | Close tag |
{: .-shortcuts}
Shortcut | Description |
---|---|
^⌥↓ | Jump to declaration under cursor |
^⇧r | Show tags |
{: .-shortcuts}
Symbols view enables Ctags support for Atom.
See: Symbols view
^⇧9 | Show Git pane |
^⇧8 | Show GitHub pane |
{: .-shortcuts}
Shortcut | Description |
---|---|
⌘d | Select word |
⌘l | Select line |
--- | --- |
⌘↓ | Move line down |
⌘↑ | Move line up |
--- | --- |
⌘⏎ | New line below |
⌘⇧⏎ | New line above |
--- | --- |
⌘⇧k | Delete line |
⌘⇧d | Duplicate line |
{: .-shortcuts}
Shortcut | Description |
---|---|
⌘⇧p | Command palette |
⌘⇧a | Add project folder |
--- | --- |
⌘n | New file |
⌘⇧n | New window |
--- | --- |
⌘f | Find in file |
⌘⇧f | Find in project |
⌘t | Search files in project |
{: .-shortcuts}
- For Windows and Linux, ⌘ is the Control key. For macOS, it's the Command key.
- For Windows and Linux, ⌥ is the Alt key. For macOS, it's the Option key.
Table of Contents generated with DocToc
Create action creators in flux standard action format. {: .-setup}
increment = createAction('INCREMENT', amount => amount)
increment = createAction('INCREMENT') // same
increment(42) === { type: 'INCREMENT', payload: 42 }
// Errors are handled for you:
err = new Error()
increment(err) === { type: 'INCREMENT', payload: err, error: true }
redux-actions {: .-crosslink}
A standard for flux action objects. An action may have an error
, payload
and meta
and nothing else.
{: .-setup}
{ type: 'ADD_TODO', payload: { text: 'Work it' } }
{ type: 'ADD_TODO', payload: new Error(), error: true }
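Since meta is also allowed, a hypothetical action carrying extra metadata (the meta field name below is just an example, not required by the standard) might look like:
```js
{ type: 'ADD_TODO', payload: { text: 'Work it' }, meta: { analytics: 'todo-added' } }
```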
flux-standard-action {: .-crosslink}
Dispatch multiple actions in one action creator. {: .-setup}
store.dispatch([
{ type: 'INCREMENT', payload: 2 },
{ type: 'INCREMENT', payload: 3 }
])
redux-multi {: .-crosslink}
Combines reducers (like combineReducers()), but without namespacing magic. {: .-setup}
re = reduceReducers(
(state, action) => state + action.number,
(state, action) => state + action.number
)
re(10, { number: 2 }) //=> 14
reduce-reducers {: .-crosslink}
Logs actions to your console. {: .-setup}
// Nothing to see here
redux-logger {: .-crosslink}
Pass promises to actions. Dispatches a flux-standard-action. {: .-setup}
increment = createAction('INCREMENT') // redux-actions
increment(Promise.resolve(42))
redux-promise {: .-crosslink}
Sorta like that, too. Works by letting you pass thunks (functions) to dispatch(). Also has 'idle checking'.
{: .-setup}
fetchData = (url) => (dispatch) => {
  dispatch({ type: 'FETCH_REQUEST' })
  fetch(url)
    .then((data) => dispatch({ type: 'FETCH_DONE', data }))
    .catch((error) => dispatch({ type: 'FETCH_ERROR', error }))
}
store.dispatch(fetchData('/posts'))
// That's actually shorthand for:
fetchData('/posts')(store.dispatch)
redux-promises {: .-crosslink}
Pass side effects declaratively to keep your actions pure. {: .-setup}
{
type: 'EFFECT_COMPOSE',
payload: {
type: 'FETCH',
payload: {url: '/some/thing', method: 'GET'}
},
meta: {
steps: [ [success, failure] ]
}
}
redux-effects {: .-crosslink}
Pass "thunks" to as actions. Extremely similar to redux-promises, but has support for getState. {: .-setup}
fetchData = (url) => (dispatch, getState) => {
  dispatch({ type: 'FETCH_REQUEST' })
  fetch(url)
    .then((data) => dispatch({ type: 'FETCH_DONE', data }))
    .catch((error) => dispatch({ type: 'FETCH_ERROR', error }))
}
store.dispatch(fetchData('/posts'))
// That's actually shorthand for:
fetchData('/posts')(store.dispatch, store.getState)
// Optional: since fetchData returns a promise, it can be chained
// for server-side rendering
store.dispatch(fetchPosts()).then(() => {
ReactDOMServer.renderToString(<MyApp store={store} />)
})
redux-thunk {: .-crosslink}
Table of Contents generated with DocToc
aws ec2 describe-instances
aws ec2 start-instances --instance-ids i-12345678c
aws ec2 terminate-instances --instance-ids i-12345678c
aws s3 ls s3://mybucket
aws s3 rm s3://mybucket/folder --recursive
aws s3 cp myfolder s3://mybucket/folder --recursive
aws s3 sync myfolder s3://mybucket/folder --exclude *.tmp
aws ecs create-cluster
--cluster-name=NAME
--generate-cli-skeleton
aws ecs create-service
brew install awscli
aws configure
aws configure --profile project1
aws configure --profile project2
- .elasticbeanstalk/config.yml - application config
- .elasticbeanstalk/dev-env.env.yml - environment config
eb config
See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html
- http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers.html
- http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
Table of Contents generated with DocToc
.on('event', callback)
.on('event', callback, context)
.on({
'event1': callback,
'event2': callback
})
.on('all', callback)
.once('event', callback) // Only happens once
object.off('change', onChange) // just the `onChange` callback
object.off('change') // all 'change' callbacks
object.off(null, onChange) // `onChange` callback for all events
object.off(null, null, context) // all callbacks for `context` all events
object.off() // all
object.trigger('event')
view.listenTo(object, event, callback)
view.stopListening()
- Collection:
  - add (model, collection, options)
  - remove (model, collection, options)
  - reset (collection, options)
  - sort (collection, options)
- Model:
  - change (model, options)
  - change:[attr] (model, value, options)
  - destroy (model, collection, options)
  - error (model, xhr, options)
- Model and collection:
  - request (model, xhr, options)
  - sync (model, resp, options)
- Router:
  - route:[name] (params)
  - route (router, route, params)
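For instance, a minimal sketch wiring a couple of these built-in events to a hypothetical book model (book is assumed to be an existing Backbone.Model instance):
```js
book.on('change', function (model, options) {
  console.log('changed attributes:', model.changedAttributes())
})
book.on('change:title', function (model, value, options) {
  console.log('new title:', value)
})
book.set({ title: 'A Study in Pink' }) // fires both handlers
```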
// All attributes are optional
var View = Backbone.View.extend({
model: doc,
tagName: 'div',
className: 'document-item',
id: "document-" + doc.id,
attributes: { href: '#' },
el: 'body',
events: {
'click button.save': 'save',
'click .cancel': function() { ··· },
'click': 'onclick'
},
constructor: function() { ··· },
render: function() { ··· }
})
view = new View()
view = new View({ el: ··· })
view.$el.show()
view.$('input')
view.remove()
view.delegateEvents()
view.undelegateEvents()
// All attributes are optional
var Model = Backbone.Model.extend({
defaults: {
'author': 'unknown'
},
idAttribute: '_id',
parse: function() { ··· }
})
var obj = new Model({ title: 'Lolita', author: 'Nabokov' })
var obj = new Model({ collection: ··· })
obj.id
obj.cid // → 'c38' (client-side ID)
obj.clone()
obj.hasChanged('title')
obj.changedAttributes() // false, or hash
obj.previousAttributes() // false, or hash
obj.previous('title')
obj.isNew()
obj.set({ title: 'A Study in Pink' })
obj.set({ title: 'A Study in Pink' }, { validate: true, silent: true })
obj.unset('title')
obj.get('title')
obj.has('title')
obj.escape('title') /* Like .get() but HTML-escaped */
obj.clear()
obj.clear({ silent: true })
obj.save()
obj.save({ attributes })
obj.save(null, {
silent: true, patch: true, wait: true,
success: callback, error: callback
})
obj.destroy()
obj.destroy({
wait: true,
success: callback, error: callback
})
obj.toJSON()
obj.fetch()
obj.fetch({ success: callback, error: callback })
var Model = Backbone.Model.extend({
validate: function(attrs, options) {
if (attrs.end < attrs.start) {
return "Can't end before it starts"
}
}
})
{: data-line="2"}
obj.validationError //=> "Can't end before it starts"
obj.isValid()
obj.on('invalid', function (model, error) { ··· })
// Triggered on:
obj.save()
obj.set({ ··· }, { validate: true })
var Model = Backbone.Model.extend({
// Single URL (string or function)
url: '/account',
url: function() { return '/account' },
// Both of these two work the same way
url: function() { return '/books/' + this.id },
urlRoot: '/books'
})
var obj = new Model({ url: ··· })
var obj = new Model({ urlRoot: ··· })
{: .-one-column}
- Backbone website (backbonejs.org)
- Backbone patterns (ricostacruz.com)
Table of Contents generated with DocToc
Here are some badges for open source projects.
Travis
[](https://travis-ci.org/rstacruz/REPO)
CodeClimate (shields.io)
[](https://codeclimate.com/github/rstacruz/REPO "CodeClimate")
Coveralls (shields.io)
[](https://coveralls.io/r/rstacruz/REPO)
Travis (shields.io)
[](https://travis-ci.org/rstacruz/REPO "See test builds")
NPM (shields.io)
[](https://npmjs.org/package/REPO "View this project on npm")
Ruby gem (shields.io)
[](http://rubygems.org/gems/GEMNAME "View this project in Rubygems")
Gitter chat
[](https://gitter.im/REPO/GITTERROOM "Gitter chat")
Gitter chat (shields.io)
[]( https://gitter.im/USER/REPO )
david-dm
[](https://david-dm.org/rstacruz/REPO)
[](http://opensource.org/licenses/MIT)
[](http://opensource.org/licenses/MIT)
Support
-------
__Bugs and requests__: submit them through the project's issues tracker.<br>
[]( https://github.com/USER/REPO/issues )
__Questions__: ask them at StackOverflow with the tag *REPO*.<br>
[]( http://stackoverflow.com/questions/tagged/REPO )
__Chat__: join us at gitter.im.<br>
[]( https://gitter.im/USER/REPO )
Installation
------------
Add [nprogress.js] and [nprogress.css] to your project.
```html
<script src='nprogress.js'></script>
<link rel='stylesheet' href='nprogress.css'/>
```
NProgress is available via [bower] and [npm].
$ bower install --save nprogress
$ npm install --save nprogress
[bower]: http://bower.io/search/?q=nprogress
[npm]: https://www.npmjs.org/package/nprogress
**PROJECTNAME** © 2014+, Rico Sta. Cruz. Released under the [MIT] License.<br>
Authored and maintained by Rico Sta. Cruz with help from contributors ([list][contributors]).
> [ricostacruz.com](http://ricostacruz.com) ·
> GitHub [@rstacruz](https://github.com/rstacruz) ·
> Twitter [@rstacruz](https://twitter.com/rstacruz)
[MIT]: http://mit-license.org/
[contributors]: http://github.com/rstacruz/nprogress/contributors
- Everything: http://shields.io/
- Version badge (gems, npm): http://badge.fury.io/
- Dependencies (ruby): http://gemnasium.com/
- Code quality (ruby): http://codeclimate.com/
- Test coverage: https://coveralls.io/
Table of Contents generated with DocToc
- Getting started
- Parameter expansions
- Loops
- Functions
- Conditionals
- Arrays
- Dictionaries
- Options
- History
- Miscellaneous
- Also see
title: Bash scripting category: CLI layout: 2017/sheet tags: [Featured] updated: 2020-07-05 keywords:
- Variables
- Functions
- Interpolation
- Brace expansions
- Loops
- Conditional execution
- Command substitution
{: .-three-column}
{: .-intro}
This is a quick reference to getting started with Bash scripting.
- Learn bash in y minutes (learnxinyminutes.com)
- Bash Guide (mywiki.wooledge.org)
#!/usr/bin/env bash
NAME="John"
echo "Hello $NAME!"
NAME="John"
echo $NAME
echo "$NAME"
echo "${NAME}!"
NAME="John"
echo "Hi $NAME" #=> Hi John
echo 'Hi $NAME' #=> Hi $NAME
echo "I'm in $(pwd)"
echo "I'm in `pwd`"
# Same
git commit && git push
git commit || echo "Commit failed"
{: id='functions-example'}
get_name() {
echo "John"
}
echo "You are $(get_name)"
See: Functions
{: id='conditionals-example'}
if [[ -z "$string" ]]; then
echo "String is empty"
elif [[ -n "$string" ]]; then
echo "String is not empty"
fi
See: Conditionals
set -euo pipefail
IFS=$'\n\t'
See: Unofficial bash strict mode
echo {A,B}.js
Expression | Description |
---|---|
{A,B} | Same as A B |
{A,B}.js | Same as A.js B.js |
{1..5} | Same as 1 2 3 4 5 |
See: Brace expansion
{: .-three-column}
name="John"
echo ${name}
echo ${name/J/j} #=> "john" (substitution)
echo ${name:0:2} #=> "Jo" (slicing)
echo ${name::2} #=> "Jo" (slicing)
echo ${name::-1} #=> "Joh" (slicing)
echo ${name:(-1)} #=> "n" (slicing from right)
echo ${name:(-2):1} #=> "h" (slicing from right)
echo ${food:-Cake} #=> $food or "Cake"
length=2
echo ${name:0:length} #=> "Jo"
See: Parameter expansion
STR="/path/to/foo.cpp"
echo ${STR%.cpp} # /path/to/foo
echo ${STR%.cpp}.o # /path/to/foo.o
echo ${STR%/*} # /path/to
echo ${STR##*.} # cpp (extension)
echo ${STR##*/} # foo.cpp (basepath)
echo ${STR#*/} # path/to/foo.cpp
echo ${STR##*/} # foo.cpp
echo ${STR/foo/bar} # /path/to/bar.cpp
STR="Hello world"
echo ${STR:6:5} # "world"
echo ${STR: -5:5} # "world"
SRC="/path/to/foo.cpp"
BASE=${SRC##*/} #=> "foo.cpp" (basepath)
DIR=${SRC%$BASE} #=> "/path/to/" (dirpath)
Code | Description |
---|---|
${FOO%suffix} | Remove suffix |
${FOO#prefix} | Remove prefix |
--- | --- |
${FOO%%suffix} | Remove long suffix |
${FOO##prefix} | Remove long prefix |
--- | --- |
${FOO/from/to} | Replace first match |
${FOO//from/to} | Replace all |
--- | --- |
${FOO/%from/to} | Replace suffix |
${FOO/#from/to} | Replace prefix |
# Single line comment
: '
This is a
multi line
comment
'
Expression | Description |
---|---|
${FOO:0:3} | Substring (position, length) |
${FOO:(-3):3} | Substring from the right |
Expression | Description |
---|---|
${#FOO} | Length of $FOO |
STR="HELLO WORLD!"
echo ${STR,} #=> "hELLO WORLD!" (lowercase 1st letter)
echo ${STR,,} #=> "hello world!" (all lowercase)
STR="hello world!"
echo ${STR^} #=> "Hello world!" (uppercase 1st letter)
echo ${STR^^} #=> "HELLO WORLD!" (all uppercase)
Expression | Description |
---|---|
${FOO:-val} | $FOO, or val if unset (or null) |
${FOO:=val} | Set $FOO to val if unset (or null) |
${FOO:+val} | val if $FOO is set (and not null) |
${FOO:?message} | Show error message and exit if $FOO is unset (or null) |
Omitting the : removes the (non)nullity checks; e.g. ${FOO-val} expands to val only if $FOO is unset, otherwise to $FOO.
{: .-three-column}
for i in /etc/rc.*; do
echo $i
done
for ((i = 0 ; i < 100 ; i++)); do
echo $i
done
for i in {1..5}; do
echo "Welcome $i"
done
for i in {5..50..5}; do
echo "Welcome $i"
done
cat file.txt | while read line; do
echo $line
done
while true; do
···
done
{: .-three-column}
myfunc() {
echo "hello $1"
}
# Same as above (alternate syntax)
function myfunc() {
echo "hello $1"
}
myfunc "John"
myfunc() {
local myresult='some value'
echo $myresult
}
result="$(myfunc)"
myfunc() {
return 1
}
if myfunc; then
echo "success"
else
echo "failure"
fi
Expression | Description |
---|---|
$# | Number of arguments |
$* | All positional arguments (as a single word) |
$@ | All positional arguments (as separate strings) |
$1 | First argument |
$_ | Last argument of the previous command |
Note: $@ and $* must be quoted in order to perform as described. Otherwise, they do exactly the same thing (arguments as separate strings).
See Special parameters.
{: .-three-column}
Note that [[ is actually a command/program that returns either 0 (true) or 1 (false). Any program that obeys the same logic (like all base utils, such as grep(1) or ping(1)) can be used as condition, see examples.
Condition | Description |
---|---|
[[ -z STRING ]] | Empty string |
[[ -n STRING ]] | Not empty string |
[[ STRING == STRING ]] | Equal |
[[ STRING != STRING ]] | Not equal |
--- | --- |
[[ NUM -eq NUM ]] | Equal |
[[ NUM -ne NUM ]] | Not equal |
[[ NUM -lt NUM ]] | Less than |
[[ NUM -le NUM ]] | Less than or equal |
[[ NUM -gt NUM ]] | Greater than |
[[ NUM -ge NUM ]] | Greater than or equal |
--- | --- |
[[ STRING =~ STRING ]] | Regexp |
--- | --- |
(( NUM < NUM )) | Numeric conditions |
Condition | Description |
---|---|
[[ -o noclobber ]] | If OPTIONNAME is enabled |
--- | --- |
[[ ! EXPR ]] | Not |
[[ X && Y ]] | And |
[[ X || Y ]] | Or |
Condition | Description |
---|---|
[[ -e FILE ]] | Exists |
[[ -r FILE ]] | Readable |
[[ -h FILE ]] | Symlink |
[[ -d FILE ]] | Directory |
[[ -w FILE ]] | Writable |
[[ -s FILE ]] | Size is > 0 bytes |
[[ -f FILE ]] | File |
[[ -x FILE ]] | Executable |
--- | --- |
[[ FILE1 -nt FILE2 ]] | 1 is more recent than 2 |
[[ FILE1 -ot FILE2 ]] | 2 is more recent than 1 |
[[ FILE1 -ef FILE2 ]] | Same files |
# String
if [[ -z "$string" ]]; then
echo "String is empty"
elif [[ -n "$string" ]]; then
echo "String is not empty"
else
echo "This never happens"
fi
# Combinations
if [[ X && Y ]]; then
...
fi
1 function createCounter() {
2 let counter = 0;
3
4 return function() {
5 counter += 1;
6 return counter;
7 }
8 }
9
10 var counter = createCounter();
11
12 console.log("counter1: " + counter());
13 console.log("counter1: " + counter());
14
15 const counter2 = createCounter();
16 console.log("counter2: " + counter2());
t0: before line 1
{
counter: undefined
}
t1: after line 1
{
counter: undefined,
createCounter: [Function#1#createCounter]
}
t2: line 10
{
counter: [Function#2#anon],
createCounter: [Function#1#createCounter],
CCFS: { // createCounterFunctionScope
counter: 0,
anon: [Function#2#anon]
}
}
t3: after line 12
{
counter: [Function#2#anon],
createCounter: [Function#1#createCounter],
CCFS: { // createCounterFunctionScope
counter: 1,
anon: [Function#2#anon]
AFS: { // anonymousFunctionScope
}
}
}
t4: after line 13
{
counter: [Function#2#anon],
createCounter: [Function#1#createCounter],
CCFS: { // createCounterFunctionScope
counter: 2,
anon: [Function#2#anon],
AFS: { // returned
},
AFS2: { // anonymousFunctionScope
}
}
}
t5: after line 15
{
counter: [Function#2#anon],
createCounter: [Function#1#createCounter],
CCFS: { // createCounterFunctionScope
counter: 2,
anon: [Function#2#anon],
AFS: { // returned
},
AFS2: { // returned
}
},
CCFS2: { // createCounterFunctionScope #2!
counter: 0,
anon: [Function#3#anon]
},
counter2: [Function#3#anon]
}
t6: after line 16
{
counter: [Function#2#anon],
createCounter: [Function#1#createCounter],
CCFS: { // createCounterFunctionScope
counter: 2,
anon: [Function#2#anon],
AFS: { // returned
},
AFS2: { // returned
}
},
CCFS2: { // createCounterFunctionScope #2!
counter: 1,
anon: [Function#3#anon],
AFS: { // anonymousFunctionScope
},
},
counter2: [Function#3#anon]
}
// Scope
// scope answers the question of where are my functions and variables available to me
const cohort = 'Web43';
console.log( cohort )
// const and let are not attached to the window object but var is
// global variables are defined outside of functions or blocks of code and would be available to me anywhere in my program
// functions are scoped similarly to the way variables are scoped
let study = 'HTML and CSS';
function printThree() {
let study = 'JavaScript';
return `We are studying ${study}`;
}
console.log( printThree() );
console.log(study)
/*
node w3d1.js
Web43
We are studying JavaScript
HTML and CSS
*/
// const dog = 'Ada'
// function callDog () {
// console.log( dog );
// callDog();
// }
// puppy();
//var can be redeclared and updated
/*
var
- can be redeclared
- can be updated
- is function scoped
let
- cannot be redeclared
- can be updated
- is block scoped
const
- cannot be redeclared
- cannot be updated
- is block scoped
*/
if ( 1 === 1 ) {
var answer = true;
} // these {} are a block of code and let and const cannot escape them
// console.log(answer);
for ( let i = 0; i < 5; i++ ) {
console.log( i );
}
if(1 === 1){
var answer = true;
} // these {} are a block of code and let and const cannot escape them
// console.log(answer);
// for(let i = 0; i < 5; i++){
// console.log(i);
// }
// console.log(i);
/* Use const until you can't, then use let; avoid var. */
function sayHi(name){
var hello = 'hi';
function yell(){
console.log(name.toUpperCase());
}
yell();
}
sayHi('Natalie');
// yell();
console.log(hello); // ReferenceError: hello is not defined ('hello' is function-scoped inside sayHi)
Variables are used to store information to be referenced and manipulated in a computer program. A variable's sole purpose is to label and store data in computer memory. Up to this point we've been using the let
keyword as our only way of declaring a JavaScript variable. It's now time to expand your tool set to learn about the different kinds of JavaScript variables you can use!
When you finish this reading, you should be able to:
- Identify the three keywords used to declare a variable in JavaScript
- Explain the differences between const, let and var
- Identify the difference between function and block-scoped variables
- Paraphrase the concept of hoisting in regards to function and block-scoped variables
All the code you write in JavaScript is evaluated. A variable always evaluates to the value it contains no matter how you declare it.
In the beginning there was var
. The var
keyword used to be the only way to declare a JavaScript variable. However, in ECMAScript 2015 JavaScript introduced two new ways of declaring JavaScript variables: let
and const
. Meaning, in JavaScript there are three different ways to declare a variable. Each of these keywords has advantages and disadvantages and we will now talk about each keyword at length.
- let: a variable declared with the keyword let allows you to reassign that variable. A variable declared using let is scoped within a block (see the sketch below).
- const: a variable declared with the keyword const will not allow you to reassign that variable. A variable declared using const is scoped within a block.
- var: a var declared variable may or may not be reassigned, and the variable is scoped to a function.
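A minimal sketch of those differences (the variable names here are just placeholders):
```js
let city = 'Oslo'
city = 'Bergen'           // OK: let allows reassignment
// let city = 'Stavanger' // SyntaxError in the same block: cannot redeclare a let

const limit = 10
// limit = 20             // TypeError: Assignment to constant variable.

var count = 1
var count = 2             // OK: var allows redeclaration (and reassignment)
```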
For this course and for your programming career moving forward, we recommend you always use let & const. These two keywords allow us to be the clearest about our intentions for the variable we are creating.
A wonderful definition of hoisting by Mabishi Wakio, "Hoisting is a JavaScript mechanism where variables and function declarations are moved to the top of their scope before code execution."
What this means is that when you run JavaScript code the variables and function declarations will be hoisted to the top of their particular scope. This is important because const
and let
are block-scoped while var
is function-scoped.
Let's start by talking more about all const
, let
, and var
before we dive into why the difference of scopes and hoisting is important.
When JavaScript was young the only available variable was var
. The var
keyword creates function-scoped variables. That means when you use the var
keyword to declare a variable that variable will be confined to the scope of the current function.
Here is a simple example of declaring a var
variable within a function:
function test() {
var a = 10;
console.log(a); // => 10
}
One of the drawbacks of using var
is that it is a less indicative way of defining a variable.
Hoisting with function-scoped variables
Let's take a look at what hoisting does to a function-scoped variable:
function test() {
console.log(hoistedVar); // => undefined
var hoistedVar = 10;
}
test();
Huh - that's weird. You'd expect an error from referring to a variable like hoistedVar
before it's defined, something like: ReferenceError: hoistedVar is not defined
. However this is not the case because of hoisting in JavaScript!
So essentially hoisting will, in the computer's memory, declare a variable at the top of its scope. With a function-scoped variable, var, the name of the variable will be hoisted to the top of the function. In the above snippet, since hoistedVar is declared using the var keyword, hoistedVar's scope is the test function. To be clear, what is being hoisted is the declaration, not the assignment itself.
In JavaScript, all variables defined with the var
keyword have an initial value of undefined
. Here is a translation of how JavaScript would deal with hoisting in the above test
function:
function test() {
// JavaScript will declare the variable *in computer memory* at the top of it's scope
var hoistedVar;
// since hoisting declared the variable above we now get
// the value of 'undefined'
console.log(hoistedVar); // => undefined
var hoistedVar = 10;
}
When you are declaring a variable with the keyword let or const you are declaring a variable that exists within block scope. Blocks in JavaScript are denoted by curly braces ({}). The following examples create a block scope: if statements, while loops, switch statements, and for loops.
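As a quick sketch, a for loop shows the difference between a block-scoped and a function-scoped loop counter:
```js
function countBlocks() {
  for (var i = 0; i < 3; i++) {}
  console.log(i) // 3 — var ignores the loop's block, so i is still visible here

  for (let j = 0; j < 3; j++) {}
  console.log(j) // ReferenceError: j is not defined — let stays inside the block
}
countBlocks()
```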
Using the keyword let
We can use let
to declare re-assignable block-scoped variables. You are, of course, very familiar with let
so let's take a look at how let
works within a block scope:
function blockScope() {
let test = "upper scope";
if (true) {
let test = "lower scope";
console.log(test); // "lower scope"
}
console.log(test); // "upper scope"
}
In the example above we can see that the test
variable was declared twice using the keyword let
but since they were declared within different scopes they have different values.
JavaScript will raise a SyntaxError
if you try to declare the same let
variable twice in one block.
if (true) {
let test = "this works!";
let test = "nope!"; // Identifier 'test' has already been declared
}
Whereas if you try the same example with var
:
var test = "this works!";
var test = "nope!";
console.log(test); // prints "nope!"
We can see above that var
will allow you to redeclare a variable twice which can lead to some very confusing and frustrating debugging.
Feel free to peruse the documentation for the keyword let
for more examples.
Using the keyword const
We use const
to declare block-scoped variables that can not be reassigned. In JavaScript variables that cannot be reassigned are called constants. Constants should be used for values that will not be re-declared or re-assigned.
Properties of constants:
- They are block-scoped like let.
- JavaScript enforces constants by raising an error if you try to reassign them.
- Trying to redeclare a constant with a var or let by the same name will also raise an error.
Let's look at a quick example of what happens when trying to reassign a constant:
> const favFood = "cheeseboard pizza"; // Initializes a constant
undefined
> const favFood = "inferior food"; // Re-initialization raises an error
TypeError: Identifier 'favFood' has already been declared
> let favFood = "other inferior food"; // Re-initialization raises an error
TypeError: Identifier 'favFood' has already been declared
> favFood = "deep-dish pizza"; // Re-assignment raises an error
TypeError: Assignment to constant variable.
We cannot reassign a constant, but constants that are assigned to Reference types are mutable. The name binding of a constant is immutable. For example, if we set a constant equal to an Reference type like an object, we can still modify that object:
const animals = {};
animals.big = "beluga whale"; // This works!
animals.small = "capybara"; // This works!
animals = { big: "beluga whale" }; // Will error because of the reassignment
Constants cannot be reassigned but, just like with let
, new constants of the same names can be declared within nested scopes.
Take a look at the following for an example:
const favFood = "cheeseboard pizza";
console.log(favFood);
if (true) {
// This works! Declaration is scoped to the `if` block
const favFood = "noodles";
console.log(favFood); // Prints "noodles"
}
console.log(favFood); // Prints 'cheeseboard pizza'
Just like with let
when you use const
twice in the same block JavaScript will raise a SyntaxError
.
if (true) {
const test = "this works!";
const test = "nope!"; // SyntaxError: Identifier 'test' has already been declared
}
Hoisting with block-scoped variables
When JavaScript ES6 introduced new ways of declaring a variable using let
and const
the idea of block-level hoisting was also introduced. Block scope hoisting allows developers to avoid previous debugging debacles that naturally happened from using var
.
Let's take a look at what hoisting does to a block-scoped variable:
if (true) {
console.log(str); // => Uncaught ReferenceError: Cannot access 'str' before initialization
const str = "apple";
}
Looking at the above we can see that an explicit error is thrown if you attempt to use a block-scoped variable before it was declared. This is the typical behavior in a lot of programming languages - that a variable cannot be referred to until initialized to a value.
However, JavaScript is still performing hoisting with block-scoped declared variables. The difference lies in how it initializes them. Meaning that let
and const
variables are not initialized to the value of undefined
.
The time before a let or const variable is declared, but not yet initialized, is called the Temporal Dead Zone. A very cool name for a simple idea. Variables declared using let
and const
are not initialized until their definitions are evaluated. Meaning, you will get an error if you try to reference a let
or const
declared variable before it is evaluated.
Let's look at one more example that should illuminate the presence of the Temporal Dead Zone:
var str = "not apple";
if (true) {
console.log(str); //Uncaught ReferenceError: Cannot access 'str' before initialization
let str = "apple";
}
In the above example we can see that inside the if
block the let
declared variable, str
, throws an error. Showing that the error thrown by a let
variable in the temporal dead zone takes precedence over any scope chaining that would attempt to go to the outer scope to find a value for the str
variable.
Let's now take a deeper look at the comparison of using function vs. block scoped variables.
Let's start with a simple example:
function partyMachine() {
var string = "party";
console.log("this is a " + string);
}
Looks good so far but let's take that example a step farther and see some of the less fun parts of the var
keyword in terms of scope:
function partyMachine() {
var string = "party";
if (true) {
// since var is not block-scoped and not constant
// this assignment sticks!
var string = "bummer";
}
console.log("this is a " + string);
}
partyMachine(); // => "this is a bummer"
We can see in the above example how the flexibility of var
can ultimately be a bad thing. Since var
is function-scoped and can be reassigned and re-declared without error it is very easy to overwrite variable values by accident.
This is the problem that ES6 introduced let
and const
to solve. Since let
and const
are block-scoped it's a lot easier to avoid accidentally overwriting variable values.
Let's take a look at the example function above rewritten using let
and const
:
function partyMachine() {
const string = "party";
if (true) {
// this variable is restricted to the scope of this block
const string = "bummer";
}
console.log("this is a " + string);
}
partyMachine(); // => "this is a party"
If you leave off a declaration when initializing a variable, it will become a global. Do not do this. We declare variables using the keywords var
, let
, and const
to ensure that our variables are declared within a proper scope. Any variables declared without these keywords will be declared on the global scope.
JavaScript has a single global scope, which means all of the files from your projects and any libraries you use will all be sharing the same scope. Every time a variable is declared on the global scope, the chance of a name collision increases. If we are unaware of the global variables in our code, we may accidentally overwrite variables.
Let's look at a quick example showing why this is a bad idea:
function good() {
let x = 5;
let y = "yay";
}
function bad() {
y = "Expect the unexpected (eg. globals)";
}
function why() {
console.log(y); // "Expect the unexpected (eg. globals)""
console.log(x); // Raises an error
}
why();
Limiting global variables will help you create code that is much more easily maintainable. Strive to write your functions so that they are self-contained and not reliant on outside variables. This will also be a huge help in allowing us test each function by itself.
One of our jobs as programmers is to write code that can be integrated easily within a team. In order to do that, we need to limit the number of globally declared variables in our code as much as possible, to avoid accidental name collisions.
Sloppy programmers use global variables, and you are not working so hard in order to be a sloppy programmer!
The scope of a program in JavaScript is the set of variables that are available for use within the program. If a variable or other expression is not in the current scope, then it is unavailable for use. If we declare a variable, this variable will only be valid in the scope where we declared it. We can have nested scopes, but we'll see that in a little bit.
When we declare a variable in a certain scope, it will evaluate to a specific value in that scope. We have been using the concept of scope in our code all along! Now we are just giving this concept a name.
By the end of this reading you should be able to predict the evaluation of code that utilizes local scope, block scope, lexical scope, and scope chaining
Before we start talking about different types of scope we'll be talking about the two main advantages that scope gives us:
- Security - Scope adds security to our code by ensuring that variables can only be accessed by pre-defined parts of our programs.
- Reduced Variable Name Collisions - Scope reduces variable name collisions, also known as namespace collisions, by ensuring you can use the same variable name multiple times in different scopes without accidentally overwriting those variables' values.
There are three types of scope in JavaScript: global scope
, local scope
, and block scope
.
Let's start by talking about the widest scope there is: global scope. The global scope is represented by the window
object in the browser and the global
object in Node.js. Adding attributes to these objects makes them available throughout the entire program. We can show this with a quick example:
let myName = "Apples";
console.log(myName);
// this myName references the myName variable from this scope,
// so myName will evaluate to "Apples"
The variable myName above is not inside a function; it is just lying out in the open in our code. The myName variable is part of the global scope. The global scope is the largest, outermost scope that exists.
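A browser-only sketch of how var and let relate to the window object (run this in a browser console; in Node.js you would look at the global object instead):
```js
var fromVar = 'attached'
let fromLet = 'not attached'

console.log(window.fromVar) // "attached" — a top-level var becomes a property of window
console.log(window.fromLet) // undefined — let (and const) do not attach to window
```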
While useful on occasion, global variables are best avoided. Every time a variable is declared on the global scope, the chance of a name collision increases. If we are unaware of the global variables in our code, we may accidentally overwrite variables.
The scope of a function is the set of variables that are available for use within that function. We call the scope within a function: local scope. The local scope of a function includes:
- the function's arguments
- any local variables declared inside the function
- any variables that were already declared when the function was defined
In JavaScript when we enter a new function we enter a new scope:
// global scope
let myName = "global";
function function1() {
// function1's scope
let myName = "func1";
console.log("function1 myName: " + myName);
}
function function2() {
// function2's scope
let myName = "func2";
console.log("function2 myName: " + myName);
}
function1(); // function1 myName: func1
function2(); // function2 myName: func2
console.log("global myName: " + myName); // global myName: global
In the code above we are dealing with three different scopes: the global scope, function1
, and function2
. Since each of the myName
variables were declared in separate scopes, we are allowed to reuse variable names without any issues. This is because each of the myName
variables is bound to their respective functions.
A block in JavaScript is denoted by a pair of curly braces ({}
). Examples of block statements in JavaScript are if
conditionals or for
and while
loops.
When using the keywords let
or const
the variables defined within the curly braces will be block scoped. Let's look at an example:
// global scope
let dog = "woof";
// block scope
if (true) {
let dog = "bowwow";
console.log(dog); // will print "bowwow"
}
console.log(dog); // will print "woof"
A key scoping rule in JavaScript is the fact that an inner scope does have access to variables in the outer scope.
Let's look at a simple example:
let name = "Fiona";
// we aren't passing in or defining any variables
function hungryHippo() {
console.log(name + " is hungry!");
}
hungryHippo(); // => "Fiona is hungry"
So when the hungryHippo
function is declared a new local scope will be created for that function. Continuing on that line of thought what happens when we refer to name
inside of hungryHippo
? If the name
variable is not found in the immediate scope, JavaScript will search all of the accessible outer scopes until it finds a variable name that matches the one we are referencing. Once it finds the first matching variable, it will stop searching. In JavaScript this is called scope chaining.
Now let's look at an example of scope chaining with nested scope. Just like functions in JavaScript, a scope can be nested within another scope. Take a look at the example below:
// global scope
let person = "Rae";
// sayHello function's local scope
function sayHello() {
let person = "Jeff";
// greet function's local scope
function greet() {
console.log("Hi, " + person + "!");
}
greet();
}
sayHello(); // logs 'Hi, Jeff!'
In the example above, the variable person is referenced by greet, even though it was never declared within greet! When this code is executed, JavaScript will attempt to run the greet function, notice there is no person variable within the scope of the greet function, and move on to see whether that variable is defined in an outer scope.
Notice that the greet function prints out Hi, Jeff! instead of Hi, Rae!. This is because JavaScript will start at the innermost scope looking for a variable named person. Then JavaScript will work its way outward looking for a variable with a matching name of person. Since the person variable within sayHello is in the next level of scope above greet, JavaScript stops its scope chaining search there and uses the value of that person variable.
Functions such as greet
that use (ie. capture) variables like the person variable are called closures. We'll be talking a lot more about closures very soon!
Important An inner scope can reference outer variables, but an outer scope cannot reference inner variables:
function potatoMaker() {
let name = "potato";
console.log(name);
}
potatoMaker(); // => "potato"
console.log(name); // => ReferenceError: name is not defined
There is one last important concept to talk about when we refer to scope - and that is lexical scope. Whenever you run a piece of JavaScript that code is first parsed before it is actually run. This is known as the lexing time. In the lexing time your parser resolves variable names to their values when functions are nested.
The main take away is that lexical scope is determined at lexing time so we can determine the values of variables without having to run any code. JavaScript is a language without dynamic scoping. This means that by looking at a piece of code we can determine the values of variables just by looking at the different scopes involved.
Let's look at a quick example:
function outer() {
let x = 5;
function inner() {
// here we know the value of x because scope chaining will
// go into the scope above this one looking for variable named x.
// We do not need to run this code in order to determine the value of x!
console.log(x);
}
inner();
}
In the inner
function above we don't need to run the outer
function to know what the value of x
will be because of lexical scoping.
The scope of a program in JavaScript is the set of variables that are available for use within the program. Due to lexical scoping we can determine the value of a variable by looking at various scopes without having to run our code. Scope Chaining allows code within an inner scope to access variables declared in an outer scope.
There are three different scopes:
- global scope - the global space of JavaScript
- local scope - created when a function is defined
- block scope - created by entering a pair of curly braces
Languages | Libraries | Frameworks | Databases | Testing | Other
- GitHub
- Gitlab
- Bitbucket
- code pen
- Glitch
- Replit
- Redit
- runkit
- stack-exchange
- Netlify
- Medium
- webcomponents.dev
- npm
- Upwork
- AngelList
- Quora
- dev.to
- Observable Notebooks
- Notation
- StackShare
- Plunk
- Dribble
➤ Blog:
I write articles for:

About Me

-
🔭 Contract Web Development Relational Concepts
-
🌱 I'm currently learning React/Redux, Python, Java, Express, jQuery
-
👯 I'm looking to collaborate on Any web audio or open source educational tools.
-
🤝 I'm looking for help with Learning React
-
👨💻 All of my projects are available at https://bgoonz.github.io/
-
📝 I regularly write articles on medium && Web-Dev-Resource-Hub
-
💬 Ask me about Anything:
-
📫 How to reach me bryan.guner@gmail.com
-
⚡ Fun fact I played Bamboozle Music Festival at the Meadowlands Stadium Complex when I was 14.
A Random Walk Down Wall Street
Hitchhiker's Guide To The Galaxy
Designing recording software/hardware and using it
Try harder and listen to your parents more (the latter bit of advice would be almost certain to fall on deaf ears lol)
I built a platform that listens to a guitarist's performance and automatically triggers guitar effects at the appropriate time in the song.
Is it too basic to say Tesla... I know they're prevalent now but I've been an avid fan since as early as 2012.
Having really good ideas and forgetting them moments later.
A text
Creating things that change my every day life.
Modern Physics... almost changed my major after that class... but at the end of the day engineering was a much more fiscally secure avenue.
Learned to code ... and sing
*Disclaimer: The following wisdom is very cliche ... but... "Be the change that you wish to see in the world."
― Mahatma Gandhi
🤖 My Programming Stats:


Resume
**Programming Languages:** | JavaScript ES-6, NodeJS, React, HTML5, CSS3, SCSS, Bash Shell, Excel, SQL, NoSQL, MATLAB, Python, C++ |
---|---|
Databases: | PostgreSQL, MongoDB |
Cloud: | Docker, AWS, Google App Engine, Netlify, Digital Ocean, Heroku, Azure Cloud Services |
OS: | Linux, Windows (WSL), IOS |
Agile: | GitHub, BitBucket, Jira, Confluence |
IDEs: | VSCode, Visual Studio, Atom, Code Blocks, Sublime Text 3, Brackets |
Relational Concepts: Hallandale Beach, FL | March 2020 - Present |
---|---|
Front End Web Developer | |
- Responsible for front-end development for a custom real estate application which provides sophisticated and fully customizable filtering to allow investors and real estate professionals to narrow in on exact search targets.
- Designed mock-up screens, wireframes, and workflows for intuitive user experience.
- Migrated existing multi-page user experience into singular page interfaces using React components.
- Participated in every stage of the design from conception through development and iterative improvement.
- Produced user stories and internal documentation for future site development and maintenance.
- Implemented modern frameworks including Bootstrap and Font-Awesome to give the site an aesthetic overhaul.
- Managed all test deployments using a combination of Digital Ocean and Netlify.
- Produced unit tests using a combination of Mocha and Chai.
- Injected Google Analytics to capture pertinent usage data to produce an insightful dashboard experience.
Environment: | JavaScript, JQuery, React, HTML5 & CSS, Bootstrap, DOJO, Google Cloud, Bash Script |
---|
Cembre: Edison, NJ | Nov 2019 – Mar 2020 |
---|---|
Product Development Engineer | |
- Converted clients' product needs into technical specs to be sent to the development team in Italy.
- Reorganized internal file server structure.
- Conducted remote / in person system integration and product demonstrations.
- Presided over internal and end user software trainings in addition to producing the corresponding documentation.
- Served as the primary point of contact for troubleshooting railroad hardware and software in North America.
Environment: | Excel, AutoCAD, PowerPoint, Word |
---|
**B. S. Electrical Engineering, TCNJ, ** Ewing NJ | 2014 – 2019 |
---|
Capstone Project – Team Lead
- Successfully completed and delivered a platform to digitize a guitar signal and perform filtering before executing frequency & time domain analysis to track a current performance against prerecorded performance.
- Implemented the Dynamic Time Warping algorithm in C++ and Python to autonomously activate or adjust guitar effect at multiple pre-designated section of performance.
Environment: | C++, Python, MATLAB, PureData |
---|
My Projects
<tr>
<th>Project Name</th>
<th>Skills used</th>
<th>Description</th>
</tr>
<tr>
<td><a href='https://web-dev-resource-hub.netlify.app/'>Web-Dev-Resource-Hub (blog)</a></td>
<td>Html, Css, javascript, Python, jQuery, React, FireBase, AWS S3, Netlify, Heroku, NodeJS, PostgreSQL, C++, Web Audio API</td>
<td>My blog site contains my resource sharing and blog site ... centered mostly on web development and just a bit of audio production / generally nerdy things I find interesting.</td>
</tr>
<tr>
<td><a href='https://project-showcase-bgoonz.netlify.app/'>Dynamic Guitar Effects Triggering Using A Modified Dynamic Time Warping Algorithm</a></td>
<td>C, C++, Python, Java, Pure Data, Matlab</td>
<td>Successfully completed and delivered a platform to digitize a guitar signal and perform filtering before executing frequency & time domain analysis to track a current performance against prerecorded performance.Implemented the Dynamic Time Warping algorithm in C++ and Python to autonomously activate or adjust guitar effect at multiple pre-designated section of performance.</td>
</tr>
<tr>
<td><a href="https://trusting-dijkstra-4d3b17.netlify.app/">Data Structures & Algorithms Interactive Learning Site</a></td>
<td>HTML, CSS, Javascript, Python, Java, jQuery, Repl.it-Database API</td>
<td>An interactive and comprehensive guide and learning tool for Data Structures and Algorithms ... concentrated on JS but with some examples in Python, C++ and Java as well</td>
</tr>
<tr>
<td><a href='https://mihirbegmusic.netlify.app/'>MihirBeg.com</a></td>
<td>Html, Css, Javascript, Bootstrap, FontAwesome, jQuery</td>
<td>A responsive and mobile friendly content promotion site for an Audio Engineer to engage with fans and potential clients</td>
</tr>
<tr>
<td><a href='https://tetris42.netlify.app/'>Tetris-JS</a></td>
<td>Html, Css, Javascript</td>
<td>The classic game of Tetris implemented in plain JavaScript and styled with a retro-futuristic theme</td>
</tr>
<tr>
<td><a href="https://githtmlpreview.netlify.app/">Git Html Preview Tool</a></td>
<td>Git, Javascript, CSS3, HTML5, Bootstrap, BitBucket</td>
<td>Loads HTML using CORS proxy, then process all links, frames, scripts and styles, and load each of them using CORS proxy, so they can be evaluated by the browser.</td>
</tr>
<tr>
<td><a href='https://project-showcase-bgoonz.netlify.app/'>Mini Project Showcase</a></td>
<td>HTML, HTML5, CSS, CSS3, JavaScript, jQuery</td>
<td>Add songs and play music; data is stored in an IndexedDB database, so unless the cache is cleared, songs remain stored in the database.</td>
</tr>
</table>
The method string.replaceAll(search, replaceWith) replaces all occurrences of the search string with replaceWith.
const str = 'this is a JSsnippets example';
const updatedStr = str.replace('example', 'snippet'); // 'this is a JSsnippets snippet'
The tricky part is that the replace method replaces only the very first match of the substring we passed in:
const str = 'this is a JSsnippets example and examples are great';
const updatedStr = str.replace('example', 'snippet'); // 'this is a JSsnippets snippet and examples are great'
To get around this, we need to use a global regexp instead:
const str = 'this is a JSsnippets example and examples are great';
const updatedStr = str.replace(/example/g, 'snippet'); // 'this is a JSsnippets snippet and snippets are great'
But now we have a new friend in town, replaceAll:
const str = 'this is a JSsnippets example and examples are great';
const updatedStr = str.replaceAll('example', 'snippet'); // 'this is a JSsnippets snippet and snippets are great'
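One caveat worth knowing about replaceAll: if you pass a regular expression instead of a string, it must have the global flag, otherwise a TypeError is thrown.
const str = 'this is a JSsnippets example and examples are great';
str.replaceAll(/example/g, 'snippet'); // 'this is a JSsnippets snippet and snippets are great'
// str.replaceAll(/example/, 'snippet'); // TypeError: replaceAll must be called with a global RegExp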
# Iterative Fibonacci: returns the nth Fibonacci number
def fib_iter(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    p0 = 0
    p1 = 1
    for i in range(n - 1):
        next_val = p0 + p1
        p0 = p1
        p1 = next_val
    return next_val

for i in range(10):
    print(f'{i}: {fib_iter(i)}')
def partition(l):
    # Partition helper: use the first element as the pivot, then split the rest
    # into elements <= pivot and elements > pivot.
    pivot = l[0]
    left = [x for x in l[1:] if x <= pivot]
    right = [x for x in l[1:] if x > pivot]
    return left, pivot, right

def quicksort(l):
    # One of our base cases is an empty list or a list with one element
    if len(l) == 0 or len(l) == 1:
        return l
    # partition() returns a left list, a pivot value, and a right list
    left, pivot, right = partition(l)
    # Our sorted list looks like left + pivot + right, but sorted.
    # pivot has to be wrapped in brackets so Python can concatenate the lists.
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([]))
print(quicksort([1]))
print(quicksort([1,2]))
print(quicksort([2,1]))
print(quicksort([2,2]))
print(quicksort([5,3,9,4,8,1,7]))
print(quicksort([1,2,3,4,5,6,7]))
print(quicksort([9,8,7,6,5,4,3,2,1]))
See Older Snippets!
This will replace any spaces in file names with an underscore:
for file in *; do mv "$file" `echo $file | tr ' ' '_'` ; done
## TAKING IT A STEP FURTHER:
# Let's do it recursively:
function RecurseDirs ()
{
    oldIFS=$IFS
    IFS=$'\n'
    for f in "$@"
    do
        # Rename any files in the current directory that contain spaces
        for file in *; do mv "$file" `echo $file | tr ' ' '_'` ; done
        # Then descend into each subdirectory and repeat
        if [[ -d "${f}" ]]; then
            cd "${f}"
            RecurseDirs $(ls -1 ".")
            cd ..
        fi
    done
    IFS=$oldIFS
}
RecurseDirs "./"
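For comparison, here is a rough Node.js sketch of the same recursive rename (the function and variable names are illustrative, not part of the original snippets):
// Walk a directory tree and rename any entry whose name contains spaces,
// replacing the spaces with underscores.
const fs = require('fs');
const path = require('path');

function renameSpaces(dir) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const oldPath = path.join(dir, entry.name);
    const newName = entry.name.replace(/ /g, '_');
    const newPath = path.join(dir, newName);
    if (newName !== entry.name) fs.renameSync(oldPath, newPath);
    if (entry.isDirectory()) renameSpaces(newPath);
  }
}

renameSpaces('.');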
Language: JavaScript/jQuery
In combination with the script tag <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>, this snippet will add a copy-to-clipboard button to all of your embedded code blocks.
$(document).ready(function() {
  $('code, pre').append('<span class="command-copy"><i class="fa fa-clipboard" aria-hidden="true"></i></span>');
  $('code span.command-copy').click(function(e) {
    var text = $(this).parent().text().trim();
    var copyHex = document.createElement('input');
    copyHex.value = text;
    document.body.appendChild(copyHex);
    copyHex.select();
    document.execCommand('copy');
    console.log(copyHex.value);
    document.body.removeChild(copyHex);
  });
  $('pre span.command-copy').click(function(e) {
    var text = $(this).parent().text().trim();
    var copyHex = document.createElement('input');
    copyHex.value = text;
    document.body.appendChild(copyHex);
    copyHex.select();
    document.execCommand('copy');
    console.log(copyHex.value);
    document.body.removeChild(copyHex);
  });
});
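Note that document.execCommand('copy') is deprecated in modern browsers; where the asynchronous Clipboard API is available, the handler body could be reduced to something like this (a sketch, assuming a secure context):
$('code span.command-copy, pre span.command-copy').click(function() {
  var text = $(this).parent().text().trim();
  navigator.clipboard.writeText(text).then(function() {
    console.log('copied: ' + text);
  });
});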
// APPEND-DIR.js
// Concatenates every file in the current directory (via `cat *`) and writes the result to output.md
const fs = require('fs');
let cat = require('child_process').execSync('cat *').toString('UTF-8');
fs.writeFile('output.md', cat, (err) => {
  if (err) throw err;
});
const isAppleDevice = /Mac|iPod|iPhone|iPad/.test(navigator.platform);
console.log(isAppleDevice);
// Result: will return true if user is on an Apple device
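Since navigator.platform is marked as deprecated on MDN, a similar check can be made against the user-agent string instead (a sketch with an illustrative name):
const isAppleDeviceUA = /Mac|iPod|iPhone|iPad/.test(navigator.userAgent);
console.log(isAppleDeviceUA); // true on most Apple devices/browsers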
/*
A function named intersection(firstArr) that takes in an array and
returns a function.
When the function returned by intersection is invoked
with an array (secondArr), it returns a new array containing the elements
common to both firstArr and secondArr.
*/
function intersection(firstArr) {
  return (secondArr) => {
    let common = [];
    for (let i = 0; i < firstArr.length; i++) {
      let el = firstArr[i];
      if (secondArr.indexOf(el) > -1) {
        common.push(el);
      }
    }
    return common;
  };
}
let abc = intersection(["a", "b", "c"]); // returns a function
console.log(abc(["b", "d", "c"])); // returns [ 'b', 'c' ]
let fame = intersection(["f", "a", "m", "e"]); // returns a function
console.log(fame(["a", "f", "z", "b"])); // returns [ 'f', 'a' ]
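A variation on the same closure idea, using a Set so membership checks are O(1) instead of indexOf's linear scan (the name intersectionWithSet is illustrative, not from the original):
function intersectionWithSet(firstArr) {
  const lookup = new Set(firstArr);
  return (secondArr) => secondArr.filter((el) => lookup.has(el));
}

let abcSet = intersectionWithSet(["a", "b", "c"]);
console.log(abcSet(["b", "d", "c"])); // [ 'b', 'c' ]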
/*
First is recurSum(arr, start), which returns the sum of the elements of arr from index start to the very end.
Second is rPartSumsArr(), which recursively concatenates each partial sum into an array and, once it reaches the end of the array, returns the accumulated array.
*/
// arr1.length - 1 = 5
// arr1 [ 1, 7, 12, 6, 5, 10 ]
// ind  [ 0  1   2  3  4   5 ]
//        ↟                ↟
//      start             end
function recurSum(arr, start = 0, sum = 0) {
  if (start < arr.length) {
    return recurSum(arr, start + 1, sum + arr[start]);
  }
  return sum;
}

function rPartSumsArr(arr, partSum = [], start = 0, end = arr.length - 1) {
  if (start <= end) {
    return rPartSumsArr(arr, partSum.concat(recurSum(arr, start)), ++start, end);
  }
  return partSum.reverse();
}
// Test arrays (values match the expected output below)
const arr = [1, 1, 5, 2, 6, 10];
const arr1 = [1, 7, 12, 6, 5, 10];
console.log('------------------------------------------------rPartSumArr------------------------------------------------');
console.log('rPartSumsArr(arr)=[ 1, 1, 5, 2, 6, 10 ]: ', rPartSumsArr(arr));
console.log('rPartSumsArr(arr1)=[ 1, 7, 12, 6, 5, 10 ]: ', rPartSumsArr(arr1));
console.log('------------------------------------------------rPartSumArr------------------------------------------------');
/*
------------------------------------------------rPartSumArr------------------------------------------------
rPartSumsArr(arr)=[ 1, 1, 5, 2, 6, 10 ]: [ 10, 16, 18, 23, 24, 25 ]
rPartSumsArr(arr1)=[ 1, 7, 12, 6, 5, 10 ]: [ 10, 15, 21, 33, 40, 41 ]
------------------------------------------------rPartSumArr------------------------------------------------
*/
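Because rPartSumsArr calls recurSum once per index, the recursive version does O(n²) additions; a single right-to-left pass produces the same array in O(n). A sketch (suffixSumsAsc is an illustrative name, not from the original):
function suffixSumsAsc(arr) {
  const out = [];
  let running = 0;
  for (let i = arr.length - 1; i >= 0; i--) {
    running += arr[i];
    out.push(running);
  }
  return out;
}

console.log(suffixSumsAsc([1, 7, 12, 6, 5, 10])); // [ 10, 15, 21, 33, 40, 41 ]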
function camelToKebab(value) {
  return value.replace(/([a-z])([A-Z])/g, "$1-$2").toLowerCase();
}

function camel(str) {
  return str.replace(/(?:^\w|[A-Z]|\b\w|\s+)/g, function(match, index) {
    if (+match === 0) return ""; // or if (/\s+/.test(match)) for white spaces
    return index === 0 ? match.toLowerCase() : match.toUpperCase();
  });
}
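A couple of example calls, with illustrative input values, to show what each helper produces:
console.log(camelToKebab("backgroundColor")); // 'background-color'
console.log(camel("hello world"));            // 'helloWorld'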
// Adds two non-negative integers represented as linked lists of digits
// (least-significant digit first) and returns the sum as a new list.
// Minimal singly-linked list node used by addTwoNumbers:
function ListNode(val) {
  this.val = val;
  this.next = null;
}

function addTwoNumbers(l1, l2) {
  let result = new ListNode(0);
  let currentNode = result;
  let carryOver = 0;
  while (l1 != null || l2 != null) {
    let v1 = 0;
    let v2 = 0;
    if (l1 != null) v1 = l1.val;
    if (l2 != null) v2 = l2.val;
    let sum = v1 + v2 + carryOver;
    carryOver = Math.floor(sum / 10);
    sum = sum % 10;
    currentNode.next = new ListNode(sum);
    currentNode = currentNode.next;
    if (l1 != null) l1 = l1.next;
    if (l2 != null) l2 = l2.next;
  }
  if (carryOver > 0) {
    currentNode.next = new ListNode(carryOver);
  }
  return result.next;
}
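An illustrative test (the listFromDigits helper and the sample values are assumptions added for demonstration): 342 + 465 = 807, with digits stored least-significant first.
// Build a linked list from an array of digits
function listFromDigits(digits) {
  const dummy = new ListNode(0);
  let node = dummy;
  for (const d of digits) {
    node.next = new ListNode(d);
    node = node.next;
  }
  return dummy.next;
}

let sumList = addTwoNumbers(listFromDigits([2, 4, 3]), listFromDigits([5, 6, 4]));
let digits = [];
for (let n = sumList; n != null; n = n.next) digits.push(n.val);
console.log(digits); // [ 7, 0, 8 ]  i.e. 807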
// Function to test if a character is alphanumeric; faster than a regular
// expression in JavaScript
let isAlphaNumeric = (char) => {
  char = char.toString();
  let id = char.charCodeAt(0);
  if (
    !(id > 47 && id < 58) && // if not numeric (0-9)
    !(id > 64 && id < 91) && // if not a letter (A-Z)
    !(id > 96 && id < 123)   // if not a letter (a-z)
  ) {
    return false;
  }
  return true;
};
console.log(isAlphaNumeric("A")); //true
console.log(isAlphaNumeric(2)); //true
console.log(isAlphaNumeric("z")); //true
console.log(isAlphaNumeric(" ")); //false
console.log(isAlphaNumeric("!")); //false
function replaceWords(str, before, after) {
  if (/^[A-Z]/.test(before)) {
    after = after[0].toUpperCase() + after.substring(1);
  } else {
    after = after[0].toLowerCase() + after.substring(1);
  }
  return str.replace(before, after);
}
console.log(replaceWords("Let us go to the store", "store", "mall")) //"Let us go to the mall"
console.log(replaceWords("He is Sleeping on the couch", "Sleeping", "sitting")) //"He is Sitting on the couch"
console.log(replaceWords("His name is Tom", "Tom", "john"))
//"His name is John"
/* Simple function to flatten a nested array into a single level */
const flatten = (array) =>
  array.reduce(
    (accum, ele) => accum.concat(Array.isArray(ele) ? flatten(ele) : ele),
    []
  );
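An example call with illustrative input, showing how the nested arrays collapse into one level:
console.log(flatten([1, [2, [3, [4]]], 5])); // [ 1, 2, 3, 4, 5 ]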
// getDay() returns 0 for Sunday and 6 for Saturday, so day % 6 !== 0 means Monday through Friday
const isWeekday = (date) => date.getDay() % 6 !== 0;
console.log(isWeekday(new Date(2021, 0, 11)));
// Result: true (Monday)
console.log(isWeekday(new Date(2021, 0, 10)));
// Result: false (Sunday)
// Returns the longest prefix shared by every string in the array
function longestCommonPrefix(strs) {
  let prefix = '';
  if (strs.length === 0) return prefix;
  for (let i = 0; i < strs[0].length; i++) {
    const character = strs[0][i];
    for (let j = 0; j < strs.length; j++) {
      if (strs[j][i] !== character) return prefix;
    }
    prefix = prefix + character;
  }
  return prefix;
}
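Example calls with illustrative input:
console.log(longestCommonPrefix(["flower", "flow", "flight"])); // 'fl'
console.log(longestCommonPrefix(["dog", "racecar", "car"]));    // ''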