On day 5, we went through all of the rest of the material in the course. By the end of the day, we had been provided all of the information we should need in order to pass both MySQL Developer examinations. My notes from the final day of class follow.
Day 5
DECLARE statement
- The declare statement defines items local to a routine
- Local variables
- Conditions and handlers
- Cursors
- Declare only allowed inside a BEGIN…END
- Declarations must follow a specific order
- Variables
- Conditions – error handler – captures errors that occur in the procedure/function and determines how they should be handled
- DECLARE condition_name CONDITION FOR condition_value;
- SQLSTATE Condition value
- DECLARE null_not_allowed CONDITION FOR SQLSTATE '23000';
- MySQL errors are strings, so they need to be quoted
- Continue handler
- DECLARE CONTINUE HANDLER FOR SQLSTATE '23000' SET @x=1;
- A continue handler allows the procedure to continue even if the condition is met
- Exit handler
- An exit handler will stop the procedure/function when the condition is met (an error is generated)
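A minimal sketch putting conditions and handlers together (the table `t` and its NOT NULL column `val` are hypothetical):

```sql
DELIMITER //
CREATE PROCEDURE insert_row(IN p_val INT)
BEGIN
  -- name the SQLSTATE as a condition, then handle it
  DECLARE null_not_allowed CONDITION FOR SQLSTATE '23000';
  -- CONTINUE: record the problem and keep going instead of aborting
  DECLARE CONTINUE HANDLER FOR null_not_allowed SET @x = 1;
  SET @x = 0;
  INSERT INTO t (val) VALUES (p_val);  -- if this violates NOT NULL, @x becomes 1
END//
DELIMITER ;
```

Swapping CONTINUE for EXIT in the same declaration would instead end the BEGIN…END block when the condition is raised.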
- Cursors
- A control structure within stored routines for record retrieval
- One row at a time
- A cursor is basically a pointer pointing to the current row we are fetching
- Cursors are mostly used within loops that fetch and process the rows
- Asensitive – MySQL decides how to process the information that’s fetched
- Read-only
- Non-scrolling
- The cursor strictly moves from one row to the next
- You cannot skip records
- You cannot go backwards
- Cursors must be declared before declaring handlers
- OPEN opens a previously declared cursor
- FETCH
- obtains the next row using the specified open cursor, and advances the cursor pointer
- When there is no next row, a "no data" condition (SQLSTATE '02000') is raised
- You would declare a handler for that condition to break out of the loop
- A cursor is destroyed automatically at the end of the BEGIN…END block
- If you choose to close the cursor ahead of time, you can use the CLOSE command
- Prepared statements can only have one cursor, and it is not named
- Handlers
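The cursor notes above can be sketched as one procedure against the course's World database (the `City` table and its `Population` column come from that sample data):

```sql
DELIMITER //
CREATE PROCEDURE total_population()
BEGIN
  -- declarations must follow the required order:
  -- variables first, then cursors, then handlers
  DECLARE done INT DEFAULT 0;
  DECLARE pop INT;
  DECLARE total INT DEFAULT 0;
  DECLARE cur CURSOR FOR SELECT Population FROM City;
  -- "no data" is raised when FETCH runs past the last row
  DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1;

  OPEN cur;
  fetch_loop: LOOP
    FETCH cur INTO pop;          -- one row at a time, forward only
    IF done THEN
      LEAVE fetch_loop;
    END IF;
    SET total = total + pop;
  END LOOP;
  CLOSE cur;                     -- optional; destroyed at END anyway

  SELECT total;
END//
DELIMITER ;
```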
Triggers
- Named database objects associated with a table
- Activated when a specified event (INSERT, UPDATE or DELETE) occurs on that table, either BEFORE or AFTER the row operation
Creating triggers
- CREATE TRIGGER trigger_name { BEFORE | AFTER }
{ INSERT | UPDATE | DELETE }
ON table_name
FOR EACH ROW
triggered_statement
- CREATE TRIGGER City_AD AFTER DELETE ON City
FOR EACH ROW
INSERT INTO DeletedCity (ID, Name) VALUES (OLD.ID, OLD.Name);
- OLD and NEW are keywords that can be used within triggers to refer to the row's column values before and after the change
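NEW is the counterpart to OLD in the example above, and in a BEFORE trigger its values can even be changed before the row is written. A small sketch on the same `City` table:

```sql
-- BEFORE INSERT: clamp a bad value on NEW before the row is stored
CREATE TRIGGER City_BI BEFORE INSERT ON City
FOR EACH ROW
SET NEW.Population = IF(NEW.Population < 0, 0, NEW.Population);
```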
Trigger error handling
MySQL handles errors during trigger execution as follows:
- Failed BEFORE triggers
- The operation on the corresponding row is not performed
- AFTER triggers
- An AFTER trigger executes only if the BEFORE trigger (if any) and the row operation completed successfully
- When an error occurs partway through a statement
- In non-transactional tables, changes made before the point of error remain in effect
- In transactional tables, all changes made by the statement are rolled back
Disallowed statements
- SQL prepared statements
- Explicit or implicit COMMIT or ROLLBACK
- Return a result set
- FLUSH
- Recursive – a trigger cannot modify a table that the triggering statement is already using (reading or writing), including the trigger's own table
- Changes to the triggering row must instead be made by setting NEW values in a BEFORE trigger, before the row is written
Storage Engines
SQL Parser and storage engines
- Client sends requests to the server as SQL
- Two-tier processing
- Upper tier includes SQL parser and optimizer
- Lower tier comprises a set of storage engines
- SQL itself is not engine-specific, generally
- Some exceptions include COMMIT, ROLLBACK, CREATE TABLE (which needs an engine declaration and can allow foreign key information for transactional tables), etc.
- Steps to parse and return
- Parse
- Optimize
- Retrieve/Store
- This is where things like ORDER BY, GROUP BY, etc. are performed
- Storage medium
- Transactional capabilities
- InnoDB allows transactions while MyISAM does not
- Locking
- InnoDB locks on a row-by-row basis
- MyISAM locks on a table-by-table basis
- Backup and recovery
- MyISAM allows you to physically copy individual tables, etc. from one place in your filesystem to another place
- InnoDB stores its information for all databases on the server in a single file, so it is more difficult to copy individual items
- Optimization
- Special features
- The MySQL server operates the same way for all storage engines
- SQL commands are independent of the engine
Available Storage Engines
- MySQL provides and maintains several storage engines
- Also compatible with many third party engines
- MySQL developed:
- MyISAM
- Incredibly fast – even more so when you use compression, fixed-width columns and good indexing
- Data stored in table
- Table-level locking
- Does not support transactions
- Falcon
- NDB/Cluster
- Memory
- Data is stored in memory
- Fastest engine available, but the data is lost on server restart because nothing is written to disk
- Archive
- Highly compressed
- Only supports INSERT and queries
- Federated
- Creates a local alias for a table that lives on a separate server
- Blackhole
- CSV
- Example
- Doesn't do anything, but is in the code to show people a template of how to create an engine
- Third party engines
- InnoDB
- Transactional
- Foreign key constraints
- Row-level locking
- Backups
- solidDB
- InfoBright BrightHouse
- Nitro
- PBXT
You can set the storage engine when creating a table, but you are not required to do so. If you do not specify one, the table uses the server's default engine.
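The ENGINE clause covers both cases (table and column names here are illustrative):

```sql
-- set the engine explicitly at creation time…
CREATE TABLE log_entries (
  id  INT AUTO_INCREMENT PRIMARY KEY,
  msg VARCHAR(255)
) ENGINE = MyISAM;

-- …or convert an existing table later
ALTER TABLE log_entries ENGINE = InnoDB;
```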
MyISAM Storage Engine
- MyISAM is the MySQL default
- Manages tables with specific characteristics
- Represented by three files
- Most flexible AUTO_INCREMENT
- Fast, compressed, read-only tables save space
- Manages contention between queries
- By default, writing to disk takes precedence over reading from disk, but that can be overridden
- SELECT statements can be given higher priority with the HIGH_PRIORITY modifier
- UPDATE, DELETE and INSERT statements can be given lower priority with LOW_PRIORITY
- INSERT statements can also be deferred with DELAYED
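The priority modifiers sit directly after the statement keyword (queries here use the course's `City` table for illustration):

```sql
-- let this read jump ahead of queued writes
SELECT HIGH_PRIORITY * FROM City WHERE CountryCode = 'USA';

-- let readers finish before this write runs
UPDATE LOW_PRIORITY City SET Population = 0 WHERE ID = 1;

-- queue the insert and return to the client immediately
INSERT DELAYED INTO City (Name, CountryCode) VALUES ('Testville', 'USA');
```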
- Portable storage format
- Specify the expected number of rows for a table
- Disable updating of non-unique indexes and enable the indexes
- DISABLE KEYS/ENABLE KEYS
- Indexes of non-unique keys can be disabled so that the indexes are not re-created each time a row is changed. The indexes will be rebuilt when the keys are reenabled
- Tables take up very little space
- Three row storage formats
- Fixed-row format
- Dynamic row format
- Compressed format
- Tables must be deliberately compressed
- Compressed tables are read-only
- Tables must be decompressed to modify
- Using the myisampack utility
- Includes a mixture of “True” compression and a set of optimizations
- Each record compressed separately with small cost to decompress
- Must be performed on the host machine
- Use myisamchk afterward to update the indexes
- Always backup tables prior to running utilities
- Table-level locking
- Acquiring locks
- Tables with no holes support concurrent inserts
- A hole is free space in the middle of the data file left by deleted rows; with no holes, new rows are appended at the end while other clients read the table
- Tables with holes do not support concurrent inserts by default
- Can change priority of statements that retrieve or modify data
- Write request not processed until current readers finished
InnoDB Storage Engine
- Manages tables with specific characteristics
- Represented on disk by a .frm format file as well as data and index storage
- Supports transactions
- ACID compliant (Atomicity, Consistency, Isolation, Durability)
- General locking properties
- Does not need to set locks to achieve consistent reads
- Uses row-level locking per concurrency properties
- May acquire row locks as necessary to improve concurrency
- Deadlock is possible
- Supports two locking modifiers
- Repeatable Read isolation level allows modifiers
- Auto-recovery after crash
- MVCC and non-locking reads
- Supports foreign keys and referential integrity
- Supports consistent and online logical backup
- Tablespace for storing all table contents together
- Log files for recording transaction activity
- Format file (.frm)
- Logical storage area can contain multiple files
- Table-specific file (.ibd)
- Enabled with the --innodb_file_per_table server option
- Manages InnoDB-specific log files
- Log files used for auto-recovery
Memory Engine
- Uses tables stored in memory
- Tables are temporary
- Must have fixed-length rows
- Manages tables with specific characteristics
- Formerly HEAP engine
- Memory indexing options
- Uses HASH indexes by default
- BTREE is preferable for some operators
- Ranges
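The index type can be chosen per index in a MEMORY table (the table below is a hypothetical example; its columns are fixed-length, as the engine requires):

```sql
CREATE TABLE session_cache (
  user_id   INT,
  last_seen DATETIME,
  INDEX USING HASH (user_id),      -- default for MEMORY; fast equality lookups
  INDEX USING BTREE (last_seen)    -- better for range conditions (BETWEEN, <, >)
) ENGINE = MEMORY;
```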
Optimization
- Server processes queries more efficiently and performs better
- Optimization strategies
- Use indexing properly
- Large tables require indexing for efficiency
- Benefits of indexes
- Contain sorted values
- Use less disk I/O
- Enforce uniqueness constraints
- Downsides of indexing
- Every time a row is inserted, updated or deleted, the affected indexes must be updated as well
- Can slow down some data manipulations
- Uses additional space
- Types of indexes
- Three general types
- Primary key
- Unique
- Allows NULLs, but requires every non-NULL value to be unique within the column
- Non-unique
- Allows NULLs and allows duplicate entries
- Primary versus unique
- Primary cannot contain NULL
- Only one primary key is allowed per table
- Adding indexes
- ALTER TABLE table_name ADD PRIMARY KEY(column_name);
- CREATE INDEX index_name ON table_name(column_name);
- Removing indexes
- ALTER TABLE table_name DROP PRIMARY KEY;
- ALTER TABLE table_name DROP INDEX index_name;
- DROP INDEX index_name ON table_name;
- DROP INDEX `PRIMARY` ON table_name;
- PRIMARY must be enclosed in backticks because it is being used as a name; without them it is parsed as a keyword that doesn't belong in that syntax
- PRIMARY is the official name of the primary key in a table
- Using index prefixes
- Several column types
- Use only a specified, leading part of the column values
- Can make the index smaller and faster, because only that leading portion of each value is stored and compared
- CREATE INDEX part_of_name ON customer (name(10));
- Can be used on composite indexes, as well
- INDEX (column_name1(15),column_name2(10))
- You can show index information by using the SHOW INDEX command
- Full text indexes
- Only useful with MyISAM tables
- Definition can be given with CREATE TABLE, ALTER TABLE, CREATE INDEX
- Create index after table creation for large datasets
- Use MATCH()…AGAINST() syntax
- SELECT title FROM books WHERE MATCH(title) AGAINST ('prince potter');
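A full-text index to back that query could be defined either way (the `books` table is the hypothetical one from the query above):

```sql
-- at table creation…
CREATE TABLE books (
  id    INT AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(200),
  FULLTEXT (title)
) ENGINE = MyISAM;

-- …or afterward, which is faster when the data is already loaded:
-- ALTER TABLE books ADD FULLTEXT (title);
```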
- Well-written queries
- Using EXPLAIN to determine query processing
- Returns useful information
- Shows if index is required
- Shows if index is being used
- Analyzes query rewrites
- EXPLAIN works with SELECT queries
- EXPLAIN for JOINs
- Joins tend to increase amount of server processing
- EXPLAIN can help reduce server impact
- The type column indicates the join type
- type column output
- system
- const
- eq_ref
- ref
- ref_or_null
- index_merge
- unique_subquery
- index_subquery
- range
- index
- ALL
- The extra column (page 21-22)
- using index
- where used
- distinct
- not exists
- Inefficient query “extra” output
- using filesort
- using temporary
- range checked for each record
- Indicators of worst performance
- using filesort
- using temporary
- Rewrite query and run EXPLAIN again
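That workflow in miniature, again using the course's `City` table (the index name is illustrative):

```sql
-- without an index on CountryCode, type shows ALL (full table scan)
EXPLAIN SELECT Name FROM City WHERE CountryCode = 'USA';

-- add an index, then rerun EXPLAIN: type should change to ref,
-- with the new index reported in the key column
CREATE INDEX idx_country ON City (CountryCode);
EXPLAIN SELECT Name FROM City WHERE CountryCode = 'USA';
```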
- Efficiency principles
- No indexed columns within an expression
- Beneficial for joins that compare columns from two tables
- Use a comparison value of the same type as the column
- WHERE id=18 (if the column is numeric)
- WHERE id='18' (if the column is a string type)
- Pattern matching
- WHERE name LIKE 'de%'
- WHERE name >= 'de' AND name < 'df' – not as good as the example above
- WHERE name LIKE '%de' – the leading wildcard forces a full table scan
- Rewrite using a trigger that maintains an additional column
- WHERE LENGTH(column)=5 – bad
- WHERE column_length=5 – column_length is a separate column that’s been automatically generated to show the value that would be returned by LENGTH(column)
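The trigger-maintained column mentioned above might be sketched like this (the `customer` table, `name` column, and `name_length` helper column are all hypothetical; a matching BEFORE UPDATE trigger would also be needed to keep the column in sync):

```sql
-- add the derived column plus an index on it
ALTER TABLE customer
  ADD COLUMN name_length INT,
  ADD INDEX (name_length);

-- keep it in sync as rows are inserted
CREATE TRIGGER customer_BI BEFORE INSERT ON customer
FOR EACH ROW
SET NEW.name_length = LENGTH(NEW.name);

-- an indexed comparison now replaces the per-row function call
SELECT * FROM customer WHERE name_length = 5;
```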
- Reduce amount of output a query produces
- Use the LIMIT clause
- Reduces information going over the network
- Allows server to terminate query processing earlier
- Use the WHERE clause
- More improvement with index or column
- Techniques for updating tables
- Use DELETE and UPDATE same as SELECT
- Multi-row INSERT
- INSERT INTO t (id, name) VALUES (1,'Bea'),(2,'Belle'),(3,'Bernice');
- Better performance with a transaction
- LOAD DATA INFILE faster than multi-row INSERT
- Even faster with MyISAM if you disable keys before loading the file and enable them when finished
- Use REPLACE rather than DELETE plus INSERT
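The last two points combined, assuming a MyISAM table `t` like the one in the multi-row INSERT above (the data file path is illustrative):

```sql
-- bulk load: disable non-unique indexes, load, then rebuild them once
ALTER TABLE t DISABLE KEYS;
LOAD DATA INFILE '/tmp/rows.txt' INTO TABLE t;
ALTER TABLE t ENABLE KEYS;

-- REPLACE deletes any row with a conflicting key, then inserts,
-- in a single statement instead of DELETE plus INSERT
REPLACE INTO t (id, name) VALUES (1, 'Bea');
```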
- Generating summary tables
- Select records to generate summaries
- Summary table strategy
- Several benefits to this strategy
- The original table is locked less often, so it is available to other clients more of the time
- Queries against the summary table do not affect the original table from which it was derived
- Consider making a memory table
- CREATE TABLE ContinentGNP
SELECT Continent, AVG(GNP) AS AvgGnp
FROM Country GROUP BY Continent;
- Disadvantages
- Values are only good until changed
- Storing data twice
- Choose best matching storage engine
- Decide query types during table creation
- Choose storage engine with locking level needed
- InnoDB good for a mix of retrievals and updates
- Choose MyISAM table structure (fixed or dynamic rows) based on whether speed or disk space is the higher priority
- Choose fixed-length or variable-length according to needs
- Use read-only tables if possible
- CHAR columns take more space than VARCHAR, but are considerably faster in MyISAM; in InnoDB, CHAR is no faster than VARCHAR
- Use Memory for temporary information
* Within a MyISAM table, you can temporarily disable all non-unique keys so that new indexes will not be built while they are turned off. Use the statement ALTER TABLE table_name DISABLE KEYS. You can turn them back on with ALTER TABLE table_name ENABLE KEYS.