
Add ORM and 4.3.0 updates #198


Merged
merged 1 commit into from May 11, 2024
6 changes: 3 additions & 3 deletions config.rb
@@ -61,7 +61,7 @@
I18n.enforce_available_locales = false

# Latest versions
@latest_version = "4.2.1"
@latest_version = "4.3.0"
@latest_play_support_version = "3.0.0-scalikejdbc-4.2"
@v2_play_support_version = "2.5.1"
@v2_version = "2.5.2"
@@ -76,8 +76,8 @@
set :v1_version, @v1_version
set :v18_version, @v18_version
set :v1_latest_version, @v1_version
set :h2_version, "1.4.200"
set :logback_version, "1.2.12"
set :h2_version, "2.2.224"
set :logback_version, "1.5.6"

# Build-specific configuration
configure :build do
36 changes: 23 additions & 13 deletions source/documentation/auto-macros.html.md.erb
@@ -8,13 +8,13 @@ title: Auto Macros - ScalikeJDBC
### Avoid Boilerplate Code
<hr/>

You can avoid writing boilerplate code when using `scalikejdbc-syntax-support-macro`.
If you want to avoid writing lots of boilerplate code, `scalikejdbc-syntax-support-macro` can greatly reduce that tedious work.

<hr/>
### Setup
<hr/>

Add the following additional dependency to your sbt project.
In addition to the core library, add the following optional dependency to your `build.sbt`:

```scala
libraryDependencies += "org.scalikejdbc" %% "scalikejdbc-syntax-support-macro" % "<%= version %>"
@@ -26,10 +26,15 @@ libraryDependencies += "org.scalikejdbc" %% "scalikejdbc-syntax-support-macro" %

#### autoConstruct for extracting entities from ResultSet

Usually, we should write ResultSet extractor method as follows.
When you don't use the macros, the usual code to extract data from a `ResultSet` looks like this:

```scala
case class Company(id: Long, name: String, countryId: Option[Long], country: Option[Country] = None)
case class Company(
id: Long,
name: String,
countryId: Option[Long],
country: Option[Country] = None
)

object Company extends SQLSyntaxSupport[Company] {

@@ -41,31 +46,36 @@ object Company extends SQLSyntaxSupport[Company] {
}
```

When using scalikejdbc-syntax-support-macro, you can use `#autoConstruct` macro.
When using scalikejdbc-syntax-support-macro, you can use the `#autoConstruct` macro instead. As you can see, the code becomes significantly simpler and easier to maintain.

```scala
case class Company(id: Long, name: String, countryId: Option[Long], country: Option[Country] = None)
case class Company(
id: Long,
name: String,
countryId: Option[Long],
// This property never comes from ResultSet
country: Option[Country] = None
)

object Company extends SQLSyntaxSupport[Company] {

def apply(rs: WrappedResultSet, rn: ResultName[Company]): Company =
autoConstruct(rs, rn, "country") // "country" will be ignored when binding values from ResultSet
// "country" is execluded when binding values from ResultSet
// Note that the property neeeds to have the default `None` value
autoConstruct(rs, rn, "country")
}
```

The `#autoConstruct` method binds all the fields defined at the primary constructor automatically.

The `country` field in the above example class should be ignored. In such cases, you should specify an additional String parameter such as `"country"`.
Of course, the `"country"` will be verified at Scala compilation time. We believe that's pretty cool and useful.

The `country` field in the above example class should be ignored. In such cases, you should specify an additional String parameter such as "country". Of course, the "country" will be verified at Scala compilation time!
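
For illustration, a query using the `autoConstruct`-based extractor might look like the following minimal sketch; the `companies` table, the alias `c`, and the id value are assumptions made for this example:

```scala
import scalikejdbc._

// Minimal usage sketch: the extractor generated via autoConstruct is used as usual.
val c = Company.syntax("c")

val maybeCompany: Option[Company] = DB.readOnly { implicit session =>
  sql"select ${c.result.*} from ${Company.as(c)} where ${c.id} = ${123L}"
    .map(rs => Company(rs, c.resultName))
    .single
    .apply()
}
```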

<hr/>
#### autoColumns to avoid accessing JDBC metadata

When your code load ScalikeJDBC DAO objects, ScalikeJDBC automatically fetches all the column names for the table specified by SQLSyntaxSupport's `tableName` (via JDBC metadata API).
When your code loads ScalikeJDBC DAO objects, ScalikeJDBC automatically fetches all the column names for the table specified by `SQLSyntaxSupport`'s `tableName` via the JDBC metadata API.

If you don't prefer the behavior, you can choose loading column names from the entity class's field names instead.
The following code won't access JDBC metadata and will resolve column names from Company class's fields, primary constructor's parameters, by simply converting them to snake-cased ones or applying nameConverters to them.
If you'd prefer to avoid this behavior, you can load column names from the entity class's field names instead. The following code won't access JDBC metadata; it resolves column names from the `Company` class's primary constructor parameters by converting them to snake case or applying nameConverters to them.

```scala
case class Company(id: Long, name: String, countryId: Option[Long], country: Option[Country] = None)
25 changes: 9 additions & 16 deletions source/documentation/auto-session.html.md
@@ -8,9 +8,7 @@ title: Auto Session - ScalikeJDBC
### Why AutoSession?
<hr/>

Basic usage of ScalikeJDBC is using `DB.autoCommit/readOnly/localTx/withinTx { ...}` blocks.

However, if you'd like to re-use methods, they might not be available.
Typically, ScalikeJDBC operations are encapsulated within `DB.autoCommit`, `DB.readOnly`, and other transaction blocks. However, when you want to reuse a method such as the following across different transaction contexts,

```scala
def findById(id: Long) = DB readOnly {
@@ -19,9 +17,7 @@ def findById(id: Long) = DB readOnly {
}
```

When you use the above method in a transaction block, the code won't work as you expected.

The reason is that since `#findById(Long)` uses another session(=connection), it couldn't access uncommitted data.
`AutoSession` becomes essential. Inside a transaction block, this method may not behave as you expect: since `#findById(Long)` uses another session (= connection), it cannot access uncommitted data.

```scala
DB localTx { implicit session =>
@@ -30,15 +26,15 @@ DB localTx { implicit session =>
}
```

You need to change method's API to accept implicit parameters and now you don't need `DB` block inside the method.
To fix this, change the method to accept an implicit `DBSession` parameter instead of wrapping a `DB` block inside; the method can then join an existing transactional session provided by the caller.

```scala
def findById(id: Long)(implicit session: DBSession) =
sql"select id, name from members where id = ${id}"
.map(rs => Member(rs)).single.apply()
```

This one works as expected.
With this change, the following code works as expected.

```scala
DB localTx { implicit session =>
@@ -47,7 +43,7 @@ DB localTx { implicit session =>
}
```

But unfortunately, now we need to pass implicit parameter to `#findById` every time to use it.
Unfortunately, we now need to pass an implicit parameter to `#findById` every time we use it, which can be troublesome, especially for simple code snippets.

```scala
// now we cannot use this method directly
@@ -56,22 +52,22 @@ findById(id) // implicit parameter not found!
DB readOnly { implicit session => findById(id) }
```

`AutoSession` is a solution for this issue. Use `AutoSession` as default value of the implicit parameter.
`AutoSession` is a solution to this issue. Use `AutoSession` as the default value of the implicit parameter.

```scala
def findById(id: Long)(implicit session: DBSession = AutoSession) =
sql"select id, name from members where id = ${id}"
.map(rs => Member(rs)).single.apply()
```

This change made `#findById` flexible.
Having this default implicit value makes `#findById` both more flexible and simpler to use.

```scala
findById(id) // borrows a read-only session and gives it back
DB localTx { implicit session => findById(id) } // using implicit session
```

If you do the same with `NamedDB`, use `NamedAutoSession` as follows.
When you do the same with `NamedDB`, you can use `NamedAutoSession` as below:

```scala
def findById(id: Long)(implicit session: DBSession = NamedAutoSession("named")) =
@@ -82,8 +78,5 @@ def findById(id: Long)(implicit session: DBSession = NamedAutoSession("named"))
### ReadOnlyAutoSession
<hr/>

Since version 1.7.4, `ReadOnlyAutoSession` and `NamedReadOnlyAutoSession` is also available.

These auto sessions disallow update/execute operations.

Since version 1.7.4, `ReadOnlyAutoSession` and `NamedReadOnlyAutoSession` are also available. These sessions are tailored for read-only access and disallow any update or execute operations.
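
As a minimal sketch (the `members` table and the `countMembers` helper below are hypothetical), a read-only default session can be declared like this; select statements work as usual, while update or execute operations are rejected:

```scala
import scalikejdbc._

// Hypothetical helper that borrows a read-only session by default.
def countMembers()(implicit session: DBSession = ReadOnlyAutoSession): Long =
  sql"select count(1) from members".map(_.long(1)).single.apply().getOrElse(0L)
```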

35 changes: 16 additions & 19 deletions source/documentation/configuration.html.md
@@ -5,13 +5,13 @@ title: Configuration - ScalikeJDBC
## Configuration

<hr/>
The following 3 things should be configured.
To use ScalikeJDBC, the following three things need to be properly configured.

<hr/>
### Loading JDBC Drivers
<hr/>

In advance, some JDBC drivers must be loaded by using
Before using JDBC drivers, they must be explicitly loaded using either:

```
Class.forName(String)
@@ -23,15 +23,13 @@ or
java.sql.DriverManager.registerDriver(java.sql.Driver)
```

However many modern JDBC implementations will be automatically loaded when they are present on the classpath.

If you use `scalikejdbc-config` or `scalikejdbc-play-plugin`, they do the legacy work for you.
Many modern JDBC drivers, however, register themselves automatically when they are present on the classpath. If you use `scalikejdbc-config` or `scalikejdbc-play-plugin`, they take care of this legacy loading work for you.
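
For example, explicitly loading the H2 driver class (only needed when a driver does not register itself) looks like this:

```scala
// Explicitly load the H2 JDBC driver class; modern drivers usually make this unnecessary.
Class.forName("org.h2.Driver")
```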

<hr/>
### Connection Pool Settings
<hr/>

ConnectionPool should be initialized when starting your applications.
You need to initialize a `ConnectionPool` when your application starts:

```scala
import scalikejdbc._
@@ -50,7 +48,7 @@ val settings = ConnectionPoolSettings(
ConnectionPool.add("foo", url, user, password, settings)
```

When you use external DataSource (e.g. application server's connection pool), use javax.sql.DataSource via JNDI:
To use an external `DataSource`, such as an application server's connection pool, connect via JNDI:

```scala
import javax.naming._
@@ -64,7 +62,7 @@
ConnectionPool.add("foo", new DataSourceConnectionPool(ds))
```

`ConnectionPool` and `ConnectionPoolSettings`'s parameters are like this:
Here's how `ConnectionPool` and `ConnectionPoolSettings` parameters look:

```scala
abstract class ConnectionPool(
@@ -82,14 +80,14 @@ case class ConnectionPoolSettings(
validationQuery: String)
```

FYI: [Source Code](https://github.com/scalikejdbc/scalikejdbc/blob/master/scalikejdbc-core/src/main/scala/scalikejdbc/ConnectionPool.scala)
Further details are available in the [source code](https://github.com/scalikejdbc/scalikejdbc/blob/master/scalikejdbc-core/src/main/scala/scalikejdbc/ConnectionPool.scala).


<hr/>
### Global Settings
<hr/>

Global settings for logging for query inspection and so on.
Configure global settings for SQL error logging, query inspection, and more:

```scala
object GlobalSettings {
@@ -102,17 +100,17 @@ object GlobalSettings {
}
```

FYI: [Source Code](https://github.com/scalikejdbc/scalikejdbc/blob/master/scalikejdbc-core/src/main/scala/scalikejdbc/GlobalSettings.scala)
See the [source code](https://github.com/scalikejdbc/scalikejdbc/blob/master/scalikejdbc-core/src/main/scala/scalikejdbc/GlobalSettings.scala) for more details.
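
For instance, here is a minimal sketch of enabling SQL and timing logs globally; it uses only a few of the `LoggingSQLAndTimeSettings` options, and the threshold value is an arbitrary illustration:

```scala
import scalikejdbc._

// Minimal sketch: log every statement on a single line and warn on slow queries.
GlobalSettings.loggingSQLAndTime = LoggingSQLAndTimeSettings(
  enabled = true,
  singleLineMode = true,
  warningEnabled = true,
  warningThresholdMillis = 1000L // warn when a query takes longer than 1 second
)
```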

<hr/>
### scalikejdbc-config
<hr/>

If you use `scalikejdbc-config` which is an easy-to-use configuration loader for ScalikeJDBC which reads typesafe config, configuration is much simple.
The `scalikejdbc-config` library simplifies the configuration process by utilizing Typesafe Config to read settings:

[Typesafe Config](https://github.com/lightbend/config)

If you'd like to setup `scalikejdbc-config`, see setup page.
To learn how to configure `scalikejdbc-config`, see the setup page:

[/documentation/setup](/documentation/setup.html)

@@ -148,9 +146,9 @@ db.default.driver="org.postgresql.Driver"
db.default.url="jdbc:postgresql://localhost:5432/scalikejdbc"
```

After just calling `scalikejdbc.config.DBs.setupAll()`, Connection pools are prepared. `DBs.setup/DBs.setupAll` loads specified JDBC driver classes as well.
When setting up with `scalikejdbc.config.DBs.setupAll()`, the module automatically loads the specified JDBC drivers and prepares connection pools.

Note that due to the way JDBC works, these drivers are loaded globally for the entire JVM, and then a particular driver is selected from the global JVM list by locating the first which is able to handle the connection URL. This usually produces the expected behaviour anyway, unless you have multiple JDBC drivers in your classpath which handle the same URL (such as MySQL and MariaDB JDBC implementations, which both handle URLs of the form `jdbc:mysql:`). In these cases you may not get the implementation you are expecting, since the presence of JDBC packages in the classpath is, for many drivers, enough to have them registered globally.
JDBC drivers, once loaded, are globally available to the entire Java Virtual Machine (JVM). A specific driver is then selected from the global list by finding the first one capable of handling the given connection URL. This generally yields the expected behavior, except when multiple drivers that handle the same URL type (such as the MySQL and MariaDB drivers, which both support `jdbc:mysql:` URLs) are present in the classpath. In such cases, the driver you expect might not be selected, since for many drivers the mere presence of their JDBC packages on the classpath is enough to register them globally.

```scala
import scalikejdbc._
@@ -180,7 +178,7 @@ DBs.closeAll()
### scalikejdbc-config with Environment
<hr/>

It's also possible to add prefix(e.g. environment).
You can manage different configurations for multiple environments by adding an environment prefix (e.g. `development.`) to each key:

```
development.db.default.driver="org.h2.Driver"
@@ -199,8 +197,7 @@ prod {
}
}
```

Use `DBsWithEnv` instead of `DBs`.
To activate these settings, use `DBsWithEnv` instead of `DBs`.

```scala
DBsWithEnv("development").setupAll()
@@ -211,7 +208,7 @@ DBsWithEnv("prod").setup("sandbox")
### scalikejdbc-config for Global Settings
<hr/>

The following settings are available.
Global settings can be adjusted to log SQL errors, connection issues, and more:

```
# Global settings