Analyze a query using the given connection and return a QueryAnalysis object. The analysis is performed using the database-specific EXPLAIN ANALYZE construct, and the results are parsed into a structured format. Currently only PostgreSQL is supported.
Getting the query runtime (at the database level):
from sqlalchemy_utils import analyze
analysis = analyze(conn, 'SELECT * FROM article')
analysis.runtime  # runtime in milliseconds
Analyze can be very useful for testing that a query doesn't issue a sequential scan (scanning all rows in a table). You can, for example, write simple performance tests this way:
query = (
    session.query(Article.name)
    .order_by(Article.name)
    .limit(10)
)
analysis = analyze(self.connection, query)
analysis.node_types # [u'Limit', u'Index Only Scan']
assert 'Seq Scan' not in analysis.node_types
Parameters: conn – connection to use for the analysis; query – the query to analyze (raw SQL text or a query object)
Check if a database exists.
Parameters: url – A SQLAlchemy engine URL.
Performs backend-specific testing to quickly determine if a database exists on the server.
database_exists('postgres://postgres@localhost/name') #=> False
create_database('postgres://postgres@localhost/name')
database_exists('postgres://postgres@localhost/name') #=> True
Supports checking against a constructed URL as well.
engine = create_engine('postgres://postgres@localhost/name')
database_exists(engine.url) #=> False
create_database(engine.url)
database_exists(engine.url) #=> True
Issue the appropriate CREATE DATABASE statement.
Parameters: url – A SQLAlchemy engine URL.
To create a database, you can pass a simple URL that would have been passed to create_engine.
create_database('postgres://postgres@localhost/name')
You may also pass the url from an existing engine.
create_database(engine.url)
Has full support for mysql, postgres, and sqlite. In theory, other database engines should be supported.
Issue the appropriate DROP DATABASE statement.
Parameters: url – A SQLAlchemy engine URL.
Works similarly to the create_database function in that both url text and a constructed url are accepted.
drop_database('postgres://postgres@localhost/name')
drop_database(engine.url)
Return whether or not the given column has an index. A column has an index if it has a single-column index or if it is the first column in a compound index.
Parameters: column – SQLAlchemy Column object
import sqlalchemy as sa
from sqlalchemy_utils import has_index

class Article(Base):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String(100))
    is_published = sa.Column(sa.Boolean, index=True)
    is_deleted = sa.Column(sa.Boolean)
    is_archived = sa.Column(sa.Boolean)

    __table_args__ = (
        sa.Index('my_index', is_deleted, is_archived),
    )
table = Article.__table__
has_index(table.c.is_published) # True
has_index(table.c.is_deleted) # True
has_index(table.c.is_archived) # False
Also supports primary key indexes:

import sqlalchemy as sa
from sqlalchemy_utils import has_index

class ArticleTranslation(Base):
    __tablename__ = 'article_translation'
    id = sa.Column(sa.Integer, primary_key=True)
    locale = sa.Column(sa.String(10), primary_key=True)
    title = sa.Column(sa.String(100))
table = ArticleTranslation.__table__
has_index(table.c.locale) # False
has_index(table.c.id) # True
Return whether or not the given column has a unique index. A column has a unique index if it has a single-column primary key index or a single-column UniqueConstraint.
Parameters: column – SQLAlchemy Column object
import sqlalchemy as sa
from sqlalchemy_utils import has_unique_index

class Article(Base):
    __tablename__ = 'article'
    id = sa.Column(sa.Integer, primary_key=True)
    title = sa.Column(sa.String(100))
    is_published = sa.Column(sa.Boolean, unique=True)
    is_deleted = sa.Column(sa.Boolean)
    is_archived = sa.Column(sa.Boolean)
table = Article.__table__
has_unique_index(table.c.is_published) # True
has_unique_index(table.c.is_deleted) # False
has_unique_index(table.c.id) # True
Raises TypeError: if the given column does not belong to a Table object
Convert Python data structures to PostgreSQL-specific SQLAlchemy JSON constructs. This function is extremely useful if you need to build PostgreSQL JSON on the Python side.
Note
This function needs PostgreSQL >= 9.4
Scalars are converted to to_json SQLAlchemy function objects
json_sql(1) # Equals SQL: to_json(1)
json_sql('a') # to_json('a')
Mappings are converted to json_build_object constructs
json_sql({'a': 'c', '2': 5}) # json_build_object('a', 'c', '2', 5)
Sequences (other than strings) are converted to json_build_array constructs
json_sql([1, 2, 3]) # json_build_array(1, 2, 3)
You can also nest these data structures
json_sql({'a': [1, 2, 3]})
# json_build_object('a', json_build_array(1, 2, 3))
Parameters: value – value to be converted to SQLAlchemy PostgreSQL function constructs
Generate a SQL expression from the passed python expression.
Only the global variable, engine, is available for use in the expression. Additional local variables may be passed in the context parameter.
Note that this function is meant for convenience and protected usage. Do NOT blindly pass user input to this function, as it uses exec.
Parameters: expression – the python expression to render; context – additional local variables made available to the expression
Generate an SQL expression string with bound parameters rendered inline for the given SQLAlchemy statement.
Parameters: statement – the SQLAlchemy statement to render