module Sequel


This _pretty_table extension is only for internal use. It adds the Sequel::PrettyTable class without modifying Sequel::Dataset.

To load the extension:

Sequel.extension :_pretty_table

The arbitrary_servers extension allows you to connect to arbitrary servers/shards that were not defined when you created the database. To use it, you first load the extension into the Database object:

DB.extension :arbitrary_servers

Then you can pass arbitrary connection options for the server/shard to use as a hash:

DB[:table].server(:host=>'...', :database=>'...').all

Because Sequel can never be sure that the connection will be reused, arbitrary connections are disconnected as soon as the outermost block that uses them exits. So this example uses the same connection:

DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
  DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
    # c == c2
  end
end

But this example does not:

DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c|
end
DB.transaction(:server=>{:host=>'...', :database=>'...'}) do |c2|
  # c != c2
end

You can use this extension in conjunction with the server_block extension:

DB.with_server(:host=>'...', :database=>'...') do
  DB.synchronize do
    # All of these use the host/database given to with_server
    DB[:table].insert(...)
    DB[:table].update(...)
    DB.tables
    DB[:table].all
  end
end

If you use this extension in conjunction with the server_block extension, you may want to define the following method so that you don't need to call synchronize separately:

def DB.with_server(*)
  super{synchronize{yield}}
end

Note that this extension only works with the sharded threaded connection pool. If you are using the sharded single connection pool, you need to switch to the sharded threaded connection pool before using this extension.

The columns_introspection extension attempts to introspect the selected columns for a dataset before issuing a query. If it thinks it can guess correctly at the columns the query will use, it will return the columns without issuing a database query.

This method is not fool-proof: it's possible that some databases will use column names that Sequel does not expect. Also, it may not correctly handle all cases.

To attempt to introspect columns for a single dataset:

ds = ds.extension(:columns_introspection)

To attempt to introspect columns for all datasets on a single database:

DB.extension(:columns_introspection)

The connection_validator extension modifies a database’s connection pool to validate that connections checked out from the pool are still valid, before yielding them for use. If it detects an invalid connection, it removes it from the pool and tries the next available connection, creating a new connection if no available connection is valid. Example of use:

DB.extension(:connection_validator)

As checking connections for validity involves issuing a query, which is potentially an expensive operation, the validation checks are only run if the connection has been idle for longer than a certain threshold. By default, that threshold is 3600 seconds (1 hour), but it can be modified by the user; set it to -1 to always validate connections on checkout:

DB.pool.connection_validation_timeout = -1

Note that if you set the timeout to validate connections on every checkout, you should probably manually control connection checkouts on a coarse basis, using Database#synchronize. In a web application, the optimal place for that would be a rack middleware. Validating connections on every checkout without setting up coarse connection checkouts will hurt performance, in some cases significantly. Note that setting up coarse connection checkouts reduces the achievable concurrency level. For example, in a web application, using Database#synchronize in a rack middleware will limit the number of concurrent web requests to the number of connections in the database connection pool.

Note that this extension only affects the default threaded and the sharded threaded connection pool. The single threaded and sharded single threaded connection pools are not affected. As the only reason to use the single threaded pools is for speed, and this extension makes the connection pool slower, there’s not much point in modifying this extension to work with the single threaded pools. The threaded pools work fine even in single threaded code, so if you are currently using a single threaded pool and want to use this extension, switch to using a threaded pool.

The constraint_validations extension is designed to easily create database constraints inside create_table and alter_table blocks. It also adds relevant metadata about the constraints to a separate table, which the constraint_validations model plugin uses to set up automatic validations.

To use this extension, you first need to load it into the database:

DB.extension(:constraint_validations)

Note that you should only need to do this when modifying the constraint validations (i.e. when migrating). You should probably not load this extension in general application code.

You also need to make sure to add the metadata table for the automatic validations. By default, this table is called sequel_constraint_validations.

DB.create_constraint_validations_table

This table should only be created once. For new applications, you generally want to create it first, before creating any other application tables.

Because migrations instance_eval the up and down blocks on a database, using this extension in a migration can be done via:

Sequel.migration do
  up do
    extension(:constraint_validations)
    # ...
  end
  down do
    extension(:constraint_validations)
    # ...
  end
end

However, note that you cannot use change migrations with this extension; you need to use separate up/down migrations.

The API for creating the constraints with automatic validations is similar to the validation_helpers model plugin API. However, instead of having separate validates_* methods, it just adds a validate method that accepts a block to the schema generators. Like the create_table and alter_table blocks, this block is instance_evaled and offers its own DSL. Example:

DB.create_table(:table) do
  Integer :id
  String :name
  validate do
    presence :id
    min_length 5, :name
  end
end

instance_eval is used in this case because create_table and alter_table already use instance_eval, so losing access to the surrounding receiver is not an issue.

Here’s a breakdown of the constraints created for each constraint validation method:

All constraints except unique, unless :allow_nil is true
  CHECK column IS NOT NULL

presence (String column)
  CHECK trim(column) != ''

exact_length 5
  CHECK char_length(column) = 5

min_length 5
  CHECK char_length(column) >= 5

max_length 5
  CHECK char_length(column) <= 5

length_range 3..5
  CHECK char_length(column) >= 3 AND char_length(column) <= 5

length_range 3...5
  CHECK char_length(column) >= 3 AND char_length(column) < 5

format /foo\d+/
  CHECK column ~ 'foo\d+'

format /foo\d+/i
  CHECK column ~* 'foo\d+'

like 'foo%'
  CHECK column LIKE 'foo%'

ilike 'foo%'
  CHECK column ILIKE 'foo%'

includes ['a', 'b']
  CHECK column IN ('a', 'b')

includes [1, 2]
  CHECK column IN (1, 2)

includes 3..5
  CHECK column >= 3 AND column <= 5

includes 3...5
  CHECK column >= 3 AND column < 5

unique
  UNIQUE (column)

There are some additional API differences:

  • Only the :message and :allow_nil options are respected. The :allow_blank and :allow_missing options are not respected.

  • A new option, :name, is respected, for providing the name of the constraint. It is highly recommended that you provide a name for all constraint validations, as otherwise, it is difficult to drop the constraints later.

  • The includes validation only supports an array of strings, an array of integers, or a range of integers.

  • There are like and ilike validations, which are similar to the format validation but use a case-sensitive or case-insensitive LIKE pattern. LIKE patterns are very simple, so many regexp patterns cannot be expressed by them, but only a couple of databases (PostgreSQL and MySQL) support regexp patterns.

  • When using the unique validation, column names cannot have embedded commas. For similar reasons, when using an includes validation with an array of strings, none of the strings in the array can have embedded commas.

  • The unique validation does not support an arbitrary number of columns. For a single column, just the symbol should be used, and for an array of columns, an array of symbols should be used. There is no support for creating two separate unique validations for separate columns in a single call.

  • A drop method can be called with a constraint name in an alter_table validate block to drop an existing constraint and the related validation metadata.

  • While it is allowed to create a presence constraint with :allow_nil set to true, doing so does not create a constraint unless the column has String type.

Note that this extension has the following issues on certain databases:

  • MySQL does not support check constraints (they are parsed but ignored), so using this extension does not actually set up constraints on MySQL, except for the unique constraint. It can still be used on MySQL to add the validation metadata so that the plugin can set up automatic validations.

  • On SQLite, adding constraints to a table is not supported, so it must be emulated by dropping the table and recreating it with the constraints. If you want to use this plugin on SQLite with an alter_table block, you should drop all constraint validation metadata using drop_constraint_validations_for(:table=>'table'), and then readd all constraints you want to use inside the alter table block, making no other changes inside the alter_table block.

The date_arithmetic extension adds the ability to perform database-independent addition/subtraction of intervals to/from dates and timestamps.

First, you need to load the extension into the database:

DB.extension :date_arithmetic

Then you can use the Sequel.date_add and Sequel.date_sub methods to return Sequel expressions:

add = Sequel.date_add(:date_column, :years=>1, :months=>2, :days=>3)
sub = Sequel.date_sub(:date_column, :hours=>1, :minutes=>2, :seconds=>3)

In addition to specifying the interval as a hash, there is also support for specifying the interval as an ActiveSupport::Duration object:

require 'active_support/all'
add = Sequel.date_add(:date_column, 1.years + 2.months + 3.days)
sub = Sequel.date_sub(:date_column, 1.hours + 2.minutes + 3.seconds)

These expressions can be used in your datasets, or anywhere else that Sequel expressions are allowed:

DB[:table].select(add.as(:d)).where(sub > Sequel::CURRENT_TIMESTAMP)

The empty_array_ignore_nulls extension changes Sequel's literalization of IN/NOT IN expressions with an empty array value so that they never return NULL, even if one of the referenced columns is NULL:

DB[:test].where(:name=>[])
# SELECT * FROM test WHERE (1 = 0)
DB[:test].exclude(:name=>[])
# SELECT * FROM test WHERE (1 = 1)

The default Sequel behavior is to respect NULLs, so that when name is NULL, the expression returns NULL.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:empty_array_ignore_nulls)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:empty_array_ignore_nulls)

The error_sql extension adds a Sequel::DatabaseError#sql method that you can use to get the sql that caused the error to be raised.

begin
  DB.run "Invalid SQL"
rescue => e
  puts e.sql # "Invalid SQL"
end

On some databases, the error message contains part or all of the SQL used, but on other databases, none of the SQL used is displayed in the error message, so it can be difficult to track down what is causing the error without using a logger. This extension should hopefully make debugging easier on databases that have bad error messages.

This extension may not work correctly in the following cases:

  • log_yield is not used when executing the query.

  • The underlying exception is frozen or reused.

  • The underlying exception doesn’t correctly record instance variables set on it (seems to happen on JRuby when underlying exception objects are Java exceptions).

To load the extension into the database:

DB.extension :error_sql

The eval_inspect extension changes inspect for Sequel::SQL::Expression subclasses to return a string suitable for ruby’s eval, such that

eval(obj.inspect) == obj

is true. The above code is true for most of ruby’s simple classes such as String, Integer, Float, and Symbol, but it’s not true for classes such as Time, Date, and BigDecimal. Sequel attempts to handle situations where instances of these classes are a component of a Sequel expression.

To load the extension:

Sequel.extension :eval_inspect

The filter_having extension allows Dataset#filter, #and, #or, and #exclude to operate on the HAVING clause if the dataset already has a HAVING clause, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:filter_having)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:filter_having)

The from_block extension changes Database#from so that blocks given to it are treated as virtual rows applying to the FROM clause, instead of virtual rows applying to the WHERE clause. This will probably be made the default in the next major version of Sequel.

This makes it easier to use table returning functions:

DB.from{table_function(1)}
# SELECT * FROM table_function(1)

To load the extension into the database:

DB.extension :from_block

The graph_each extension adds Dataset#graph_each and makes Dataset#each call graph_each if the dataset has been graphed. Dataset#graph_each splits result hashes into subhashes per table:

DB[:a].graph(:b, :id=>:b_id).all
# => [{:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}]

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:graph_each)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:graph_each)

The hash_aliases extension allows Dataset#select and Dataset#from to treat a hash argument as an alias specification, with keys being the expressions and values being the aliases, which was the historical behavior before Sequel 4. It is only recommended to use this for backwards compatibility.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:hash_aliases)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:hash_aliases)

The LooserTypecasting extension loosens the default database typecasting for the following types:

:float
  uses to_f instead of Float()

:integer
  uses to_i instead of Integer()

:decimal
  doesn't check string conversion with Float()

:string
  silently allows hash and array conversion to string

To load the extension into the database:

DB.extension :looser_typecasting

The meta_def extension is designed for backwards compatibility with older Sequel code that uses the meta_def method on Database, Dataset, and Model classes and/or instances. It is not recommended for usage in new code. To load this extension:

Sequel.extension :meta_def

Adds the Sequel::Migration and Sequel::Migrator classes, which allow the user to easily group schema changes and migrate the database to a newer version or revert to a previous version.

To load the extension:

Sequel.extension :migration

The mssql_emulate_lateral_with_apply extension converts queries that use LATERAL into queries that use CROSS/OUTER APPLY, allowing code that works on databases that support LATERAL via Dataset#lateral to run on Microsoft SQL Server and Sybase SQLAnywhere.

This is available as a separate extension instead of integrated into the Microsoft SQL Server and Sybase SQLAnywhere support because few people need it and there is a performance hit to code that doesn’t use it.

It is possible there are cases where this emulation does not work. Users should probably verify that correct results are returned when using this extension.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:mssql_emulate_lateral_with_apply)

Or you can load it into all of a database’s datasets:

DB.extension(:mssql_emulate_lateral_with_apply)

The null_dataset extension adds the Dataset#nullify method, which returns a cloned dataset that will never issue a query to the database. It implements the null object pattern for datasets.

To load the extension:

Sequel.extension :null_dataset

The most common usage is probably in a method that must return a dataset, where the method knows the dataset shouldn’t return anything. With standard Sequel, you’d probably just add a WHERE condition that is always false, but that still results in a query being sent to the database, and can be overridden using unfiltered, the OR operator, or a UNION.

Usage:

ds = DB[:items].nullify.where(:a=>:b).select(:c)
ds.sql # => "SELECT c FROM items WHERE (a = b)"
ds.all # => [] # no query sent to the database

Note that there is one case where a null dataset will send a query to the database. If you call columns on a nulled dataset and the dataset doesn't have an already cached version of the columns, it will create a new dataset with the same options to get the columns.

This extension uses Object#extend at runtime, which can hurt performance.

The pagination extension adds the Sequel::Dataset#paginate and each_page methods, which return paginated (limited and offset) datasets with some helpful methods that make creating a paginated display easier.

This extension uses Object#extend at runtime, which can hurt performance.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:pagination)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:pagination)

The pg_array_ops extension adds support to Sequel’s DSL to make it easier to call PostgreSQL array functions and operators.

To load the extension:

Sequel.extension :pg_array_ops

The most common usage is passing an expression to Sequel.pg_array_op:

ia = Sequel.pg_array_op(:int_array_column)

If you have also loaded the pg_array extension, you can use Sequel.pg_array as well:

ia = Sequel.pg_array(:int_array_column)

Also, on most Sequel expression objects, you can call the pg_array method:

ia = Sequel.expr(:int_array_column).pg_array

If you have loaded the core_extensions extension or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::ArrayOpMethods#pg_array:

ia = :int_array_column.pg_array

This creates a Sequel::Postgres::ArrayOp object that can be used for easier querying:

ia[1]     # int_array_column[1]
ia[1][2]  # int_array_column[1][2]
ia.contains(:other_int_array_column)     # @> 
ia.contained_by(:other_int_array_column) # <@
ia.overlaps(:other_int_array_column)     # &&
ia.concat(:other_int_array_column)       # ||

ia.push(1)         # int_array_column || 1
ia.unshift(1)      # 1 || int_array_column

ia.any             # ANY(int_array_column)
ia.all             # ALL(int_array_column)
ia.dims            # array_dims(int_array_column)
ia.length          # array_length(int_array_column, 1)
ia.length(2)       # array_length(int_array_column, 2)
ia.lower           # array_lower(int_array_column, 1)
ia.lower(2)        # array_lower(int_array_column, 2)
ia.join            # array_to_string(int_array_column, '', NULL)
ia.join(':')       # array_to_string(int_array_column, ':', NULL)
ia.join(':', ' ')  # array_to_string(int_array_column, ':', ' ')
ia.unnest          # unnest(int_array_column)

See the PostgreSQL array function and operator documentation for more details on what these functions and operators do.

If you are also using the pg_array extension, you should load it before loading this extension. Doing so will allow you to use PGArray#op to get an ArrayOp, allowing you to perform array operations on array literals.

The pg_hstore_ops extension adds support to Sequel’s DSL to make it easier to call PostgreSQL hstore functions and operators.

To load the extension:

Sequel.extension :pg_hstore_ops

The most common usage is taking an object that represents an SQL expression (such as a :symbol), and calling Sequel.hstore_op with it:

h = Sequel.hstore_op(:hstore_column)

If you have also loaded the pg_hstore extension, you can use Sequel.hstore as well:

h = Sequel.hstore(:hstore_column)

Also, on most Sequel expression objects, you can call the hstore method:

h = Sequel.expr(:hstore_column).hstore

If you have loaded the core_extensions extension or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::HStoreOpMethods#hstore:

h = :hstore_column.hstore

This creates a Sequel::Postgres::HStoreOp object that can be used for easier querying:

h - 'a'    # hstore_column - CAST('a' AS text)
h['a']     # hstore_column -> 'a'
h.concat(:other_hstore_column)       # ||
h.has_key?('a')                      # ?
h.contain_all(:array_column)         # ?&
h.contain_any(:array_column)         # ?|
h.contains(:other_hstore_column)     # @> 
h.contained_by(:other_hstore_column) # <@

h.defined        # defined(hstore_column)
h.delete('a')    # delete(hstore_column, 'a')
h.each           # each(hstore_column)
h.keys           # akeys(hstore_column)
h.populate(:a)   # populate_record(a, hstore_column)
h.record_set(:a) # (a #= hstore_column)
h.skeys          # skeys(hstore_column)
h.slice(:a)      # slice(hstore_column, a)
h.svals          # svals(hstore_column)
h.to_array       # hstore_to_array(hstore_column)
h.to_matrix      # hstore_to_matrix(hstore_column)
h.values         # avals(hstore_column)

See the PostgreSQL hstore function and operator documentation for more details on what these functions and operators do.

If you are also using the pg_hstore extension, you should load it before loading this extension. Doing so will allow you to use HStore#op to get an HStoreOp, allowing you to perform hstore operations on hstore literals.

The pg_json_ops extension adds support to Sequel’s DSL to make it easier to call PostgreSQL JSON functions and operators (added first in PostgreSQL 9.3).

To load the extension:

Sequel.extension :pg_json_ops

The most common usage is passing an expression to Sequel.pg_json_op:

j = Sequel.pg_json_op(:json_column)

If you have also loaded the pg_json extension, you can use Sequel.pg_json as well:

j = Sequel.pg_json(:json_column)

Also, on most Sequel expression objects, you can call the pg_json method:

j = Sequel.expr(:json_column).pg_json

If you have loaded the core_extensions extension or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::JSONOpMethods#pg_json:

j = :json_column.pg_json

This creates a Sequel::Postgres::JSONOp object that can be used for easier querying:

j[1]                     # (json_column -> 1)
j[%w'a b']               # (json_column #> ARRAY['a','b'])
j.get_text(1)            # (json_column ->> 1)
j.get_text(%w'a b')      # (json_column #>> ARRAY['a','b'])
j.extract('a', 'b')      # json_extract_path(json_column, 'a', 'b')
j.extract_text('a', 'b') # json_extract_path_text(json_column, 'a', 'b')
j.array_length           # json_array_length(json_column)
j.array_elements         # json_array_elements(json_column)
j.each                   # json_each(json_column)
j.each_text              # json_each_text(json_column)
j.keys                   # json_object_keys(json_column)

j.populate(:a)           # json_populate_record(:a, json_column)
j.populate_set(:a)       # json_populate_recordset(:a, json_column)

If you are also using the pg_json extension, you should load it before loading this extension. Doing so will allow you to use JSONHash#op and JSONArray#op to get a JSONOp, allowing you to perform json operations on json literals.

The pg_loose_count extension looks at the table statistics in the PostgreSQL system tables to get a fast approximate count of the number of rows in a given table:

DB.loose_count(:table) # => 123456

It can also support schema qualified tables:

DB.loose_count(:schema__table) # => 123456

How accurate this count is depends on the number of rows added/deleted from the table since the last time it was analyzed.

To load the extension into the database:

DB.extension :pg_loose_count

The pg_range_ops extension adds support to Sequel’s DSL to make it easier to call PostgreSQL range functions and operators.

To load the extension:

Sequel.extension :pg_range_ops

The most common usage is passing an expression to Sequel.pg_range_op:

r = Sequel.pg_range_op(:range)

If you have also loaded the pg_range extension, you can use Sequel.pg_range as well:

r = Sequel.pg_range(:range)

Also, on most Sequel expression objects, you can call the pg_range method:

r = Sequel.expr(:range).pg_range

If you have loaded the core_extensions extension or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::RangeOpMethods#pg_range:

r = :range.pg_range

This creates a Sequel::Postgres::RangeOp object that can be used for easier querying:

r.contains(:other)      # range @> other
r.contained_by(:other)  # range <@ other
r.overlaps(:other)      # range && other
r.left_of(:other)       # range << other
r.right_of(:other)      # range >> other
r.starts_after(:other)  # range &> other
r.ends_before(:other)   # range &< other
r.adjacent_to(:other)   # range -|- other
r.lower            # lower(range)
r.upper            # upper(range)
r.isempty          # isempty(range)
r.lower_inc        # lower_inc(range)
r.upper_inc        # upper_inc(range)
r.lower_inf        # lower_inf(range)
r.upper_inf        # upper_inf(range)

See the PostgreSQL range function and operator documentation for more details on what these functions and operators do.

If you are also using the pg_range extension, you should load it before loading this extension. Doing so will allow you to use PGRange#op to get a RangeOp, allowing you to perform range operations on range literals.

The pg_row_ops extension adds support to Sequel’s DSL to make it easier to deal with PostgreSQL row-valued/composite types.

To load the extension:

Sequel.extension :pg_row_ops

The most common usage is passing an expression to Sequel.pg_row_op:

r = Sequel.pg_row_op(:row_column)

If you have also loaded the pg_row extension, you can use Sequel.pg_row as well:

r = Sequel.pg_row(:row_column)

Also, on most Sequel expression objects, you can call the pg_row method:

r = Sequel.expr(:row_column).pg_row

If you have loaded the core_extensions extension or you have loaded the core_refinements extension and have activated refinements for the file, you can also use Sequel::Postgres::PGRowOp::ExpressionMethods#pg_row:

r = :row_column.pg_row

There’s only fairly basic support currently. You can use the [] method to access a member of the composite type:

r[:a] # (row_column).a

This can be chained:

r[:a][:b] # ((row_column).a).b

If you’ve loaded the pg_array_ops extension, there is also support for composite types that include arrays, or arrays of composite types:

r[1][:a] # (row_column[1]).a
r[:a][1] # (row_column).a[1]

The only other support is the splat method:

r.splat # (row_column.*)

The splat method is necessary if you are trying to reference a table’s type when the table has the same name as one of its columns. For example:

DB.create_table(:a){Integer :a; Integer :b}

Let’s say you want to reference the composite type for the table:

a = Sequel.pg_row_op(:a)
DB[:a].select(a[:b]) # SELECT (a).b FROM a

Unfortunately, that doesn’t work, as it references the integer column, not the table. The splat method works around this:

DB[:a].select(a.splat[:b]) # SELECT (a.*).b FROM a

Splat also takes an argument which is used for casting. This is necessary if you want to return the composite type itself, instead of the columns in the composite type. For example:

DB[:a].select(a.splat).first # SELECT (a.*) FROM a
# => {:a=>1, :b=>2}

By casting the expression, you can get a composite type returned:

DB[:a].select(a.splat(:a)).first # SELECT (a.*)::a FROM a
# => {:a=>"(1,2)"} # or {:a=>{:a=>1, :b=>2}} if the "a" type has been registered
                   # with the pg_row extension

This feature is mostly useful for a different way to graph tables:

DB[:a].join(:b, :id=>:b_id).select(Sequel.pg_row_op(:a).splat(:a),
                                   Sequel.pg_row_op(:b).splat(:b))
# SELECT (a.*)::a, (b.*)::b FROM a INNER JOIN b ON (b.id = a.b_id)
# => {:a=>{:id=>1, :b_id=>2}, :b=>{:id=>2}}

The pg_static_cache_updater extension is designed to automatically update the caches in the models using the static_cache plugin when changes to the underlying tables are detected.

Before using the extension in production, you have to add triggers to the tables for the classes where you want the caches updated automatically. You would generally do this during a migration:

Sequel.migration do
  up do
    extension :pg_static_cache_updater
    create_static_cache_update_function
    create_static_cache_update_trigger(:table_1)
    create_static_cache_update_trigger(:table_2)
  end
  down do
    extension :pg_static_cache_updater
    drop_trigger(:table_2, default_static_cache_update_name)
    drop_trigger(:table_1, default_static_cache_update_name)
    drop_function(default_static_cache_update_name)
  end
end

After the triggers have been added, in your application process, after setting up your models, you need to listen for changes to the underlying tables:

class Model1 < Sequel::Model(:table_1)
  plugin :static_cache
end
class Model2 < Sequel::Model(:table_2)
  plugin :static_cache
end

DB.extension :pg_static_cache_updater
DB.listen_for_static_cache_updates([Model1, Model2])

When an INSERT/UPDATE/DELETE happens on the underlying table, the trigger will send a notification with the table’s OID. The application(s) listening on that channel will receive the notification, check whether the OID matches one of the model tables they are interested in, and if so, tell that model to reload its cache.

Note that listen_for_static_cache_updates spawns a new thread which will reserve its own database connection. This thread runs until the application process is shut down.

Also note that PostgreSQL does not send notifications to channels until after the transaction including the changes is committed. Also, because a separate thread is used to listen for notifications, there may be a slight delay between when the transaction is committed and when the cache is reloaded.

Requirements:

  • PostgreSQL 9.0+

  • Listening Database object must be using the postgres adapter with the pg driver (the model classes do not have to use the same Database).

  • Must be using a thread-safe connection pool (the default).

The pretty_table extension adds Sequel::Dataset#print and the Sequel::PrettyTable class for creating nice-looking plain-text tables. Example:

+--+-------+
|id|name   |
|--+-------|
|1 |fasdfas|
|2 |test   |
+--+-------+

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:pretty_table)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:pretty_table)

The query extension adds Sequel::Dataset#query which allows a different way to construct queries instead of the usual method chaining. See Sequel::Dataset#query for details.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:query)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:query)

The query_literals extension changes Sequel’s default behavior of the select, order and group methods so that if the first argument is a regular string, it is treated as a literal string, with the rest of the arguments (if any) treated as placeholder values. This allows you to write code such as:

DB[:table].select('a, b, ?', 2).group('a, b').order('c')

The default Sequel behavior would literalize that as:

SELECT 'a, b, ?', 2 FROM table GROUP BY 'a, b' ORDER BY 'c'

Using this extension changes the literalization to:

SELECT a, b, 2 FROM table GROUP BY a, b ORDER BY c

This extension makes select, group, and order methods operate like filter methods, which support the same interface.

There are very few places where Sequel’s default behavior is desirable in this area, but for backwards compatibility, the defaults won’t be changed until the next major release.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:query_literals)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:query_literals)

The schema_caching extension adds a few methods to Sequel::Database that make it easy to dump the parsed schema information to a file, and load it from that file. Loading the schema information from a dumped file is faster than parsing it from the database, so this can save bootup time for applications with large numbers of models.

Basic usage in application code:

DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache('/path/to/schema.dump')
# load model files

Then, whenever the database schema is modified, write a new cached file. You can do that with bin/sequel’s -S option:

bin/sequel -S /path/to/schema.dump postgres://...

Alternatively, if you don’t want to dump the schema information for all tables, and you aren’t worried about race conditions, you can use the following in your application code:

DB = Sequel.connect('...')
DB.extension :schema_caching
DB.load_schema_cache?('/path/to/schema.dump')
# load model files

DB.dump_schema_cache?('/path/to/schema.dump')

With this method, you just have to delete the schema dump file if the schema is modified, and the application will recreate it for you using just the tables that your models use.

Note that it is up to the application to ensure that the dumped cached schema reflects the current state of the database. Sequel does no checking to ensure this, as checking would take time and the purpose of this code is to take a shortcut.

The cached schema is dumped in Marshal format, since it is the fastest and it handles all ruby objects used in the schema hash. Because of this, you should not attempt to load the schema from an untrusted file.

The select_remove extension adds Sequel::Dataset#select_remove for removing existing selected columns from a dataset. It’s not part of Sequel core as it is rarely needed and has some corner cases where it can’t work correctly.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:select_remove)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:select_remove)

The sequel_3_dataset_methods extension adds the following dataset methods:

[]= :: filter with the first argument, update with the second
insert_multiple :: insert multiple rows at once
set :: alias for update
to_csv :: return a string in CSV format for the dataset
db= :: change the dataset’s database
opts= :: change the dataset’s opts

It is only recommended to use this for backwards compatibility.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:sequel_3_dataset_methods)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:sequel_3_dataset_methods)

The server_block extension adds the Database#with_server method, which takes a shard argument and a block, and makes it so that access inside the block will use the specified shard by default.

First, you need to enable it on the database object:

DB.extension :server_block

Then you can call with_server:

DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
end
DB[:a].all # Uses default

You can even nest calls to with_server:

DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB.with_server(:shard2) do
    DB[:a].all # Uses shard2
  end
  DB[:a].all # Uses shard1
end
DB[:a].all # Uses default

Note that if you pass nil, :default, or :read_only as the server/shard name to Dataset#server inside a with_server block, it will be ignored and the server/shard given to with_server will be used:

DB.with_server(:shard1) do
  DB[:a].all # Uses shard1
  DB[:a].server(:shard2).all # Uses shard2
  DB[:a].server(nil).all # Uses shard1
  DB[:a].server(:default).all # Uses shard1
  DB[:a].server(:read_only).all # Uses shard1
end

The set_overrides extension adds the Dataset#set_overrides and Dataset#set_defaults methods which provide a crude way to control the values used in INSERT/UPDATE statements if a hash of values is passed to Dataset#insert or Dataset#update. It is only recommended to use this for backwards compatibility.

You can load this extension into specific datasets:

ds = DB[:table]
ds = ds.extension(:set_overrides)

Or you can load it into all of a database’s datasets, which is probably the desired behavior if you are using this extension:

DB.extension(:set_overrides)

The split_array_nil extension overrides Sequel’s default handling of IN/NOT IN with arrays of values to do specific nil checking. For example,

ds = DB[:table].where(:column=>[1, nil])

By default, that produces the following SQL:

SELECT * FROM table WHERE (column IN (1, NULL))

However, because NULL = NULL is not true in SQL (it is NULL), this will not return rows in the table where the column is NULL. This extension allows for an alternative behavior more similar to ruby, which will return rows in the table where the column is NULL, using a query like:

SELECT * FROM table WHERE ((column IN (1)) OR (column IS NULL))

Similarly, for NOT IN queries:

ds = DB[:table].exclude(:column=>[1, nil])
# Default:
#   SELECT * FROM table WHERE (column NOT IN (1, NULL))
# with split_array_nil extension:
#   SELECT * FROM table WHERE ((column NOT IN (1)) AND (column IS NOT NULL))

To use this extension with a single dataset:

ds = ds.extension(:split_array_nil)

To use this extension for all of a database’s datasets:

DB.extension(:split_array_nil)

The thread_local_timezones extension allows you to set a per-thread timezone that will override the default global timezone while the thread is executing. The main use case is for web applications that execute each request in its own thread, and want to set the timezones based on the request.

To load the extension:

Sequel.extension :thread_local_timezones

The most common example is having the database always store time in UTC, but have the application deal with the timezone of the current user. That can be done with:

Sequel.database_timezone = :utc
# In each thread:
Sequel.thread_application_timezone = current_user.timezone

This extension is designed to work with the named_timezones extension.

This extension adds the thread_application_timezone=, thread_database_timezone=, and thread_typecast_timezone= methods to the Sequel module. It overrides the application_timezone, database_timezone, and typecast_timezone methods to check the related thread local timezone first, and use it if present. If the related thread local timezone is not present, it falls back to the default global timezone.

There is one special case of note. If you have a default global timezone and you want to have a nil thread local timezone, you have to set the thread local value to :nil instead of nil:

Sequel.application_timezone = :utc
Sequel.thread_application_timezone = nil
Sequel.application_timezone # => :utc
Sequel.thread_application_timezone = :nil
Sequel.application_timezone # => nil

This adds a Sequel::Dataset#to_dot method. The to_dot method returns a string that can be processed by graphviz’s dot program in order to get a visualization of the dataset. Basically, it shows a version of the dataset’s abstract syntax tree.

To load the extension:

Sequel.extension :to_dot


Public Class methods

core_extensions?()

This extension loads the core extensions.

# File lib/sequel/extensions/core_extensions.rb, line 13
def Sequel.core_extensions?
  true
end

migration(&block)

The preferred method for writing Sequel migrations, using a DSL:

Sequel.migration do
  up do
    create_table(:artists) do
      primary_key :id
      String :name
    end
  end
  down do
    drop_table(:artists)
  end
end

Designed to be used with the Migrator class, part of the migration extension.

# File lib/sequel/extensions/migration.rb, line 280
def self.migration(&block)
  MigrationDSL.create(&block)
end