We are using an Oracle NCLOB for one of the columns in our database. While doing a "buffered: false" select of 8 million rows we would always run out of memory:
using (var connection = new OracleConnection(_databaseConfiguration.ConnectionString))
{
    connection.Open();
    var result = connection.Query<ClassWithNClob>(query, param, buffered: false);
    foreach (var element in result)
    {
        // do something with element that has an NCLOB
    }
}
We got around this by using the raw OracleDataReader and, whenever we wanted to access the NCLOB, wrapping it in a using statement. We would like to use Dapper for this, as it would make the conversion much simpler, but can't at this point:
using (var connection = new OracleConnection(_databaseConfiguration.ConnectionString))
{
    connection.Open();
    using (var command = new OracleCommand(query, connection))
    using (OracleDataReader oraReader = command.ExecuteReader())
    {
        while (oraReader.Read())
        {
            using (var clobStream = oraReader.GetOracleClob(2))
            {
                // do something with clobStream.Value
            }
        }
    }
}
We believe this is a bug, but is there something we are just missing?
Dapper tries very hard not to need to know much about specific providers. I suspect that in your case it is just doing the equivalent of:
obj.YourClobMember = reader.GetValue(8);
The problem here is that, indeed, it will never be disposed by Dapper. In this scenario, your best bet would be a cleanup method that you call from your loop: check whether the clob member is non-null, and if so dispose it and assign it null. This could even be done by making your row type IDisposable.
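As a rough illustration of that suggestion, here is a sketch of the row type implementing IDisposable. ClassWithNClob comes from the question; the Payload member name and the assumption that Dapper materializes the column as an OracleClob are hypothetical:

```csharp
using System;
using Oracle.ManagedDataAccess.Types;

public class ClassWithNClob : IDisposable
{
    public int Id { get; set; }

    // Hypothetical member: assumed to receive the raw OracleClob from the reader
    public OracleClob Payload { get; set; }

    public void Dispose()
    {
        // Release the provider-side LOB handle as soon as the row is consumed
        Payload?.Dispose();
        Payload = null;
    }
}

// In the unbuffered loop, dispose each row once you are done with it:
// foreach (var element in connection.Query<ClassWithNClob>(query, param, buffered: false))
// {
//     using (element)
//     {
//         // read element.Payload.Value here
//     }
// }
```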
There might also be something possible involving a custom type handler, but the fact that the obvious type here might be byte[] makes it awkward, as that already has hard-nosed handling.
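One way around that awkwardness, sketched under assumptions: wrap the value in a custom type (ClobText below is invented for illustration) so Dapper's built-in handling for string/byte[] doesn't apply, and register a SqlMapper.TypeHandler that reads and eagerly disposes the OracleClob:

```csharp
using System.Data;
using Dapper;
using Oracle.ManagedDataAccess.Types;

// Hypothetical wrapper type, used so the built-in string handling is bypassed
public sealed class ClobText
{
    public string Value { get; set; }
}

public sealed class ClobTextHandler : SqlMapper.TypeHandler<ClobText>
{
    public override ClobText Parse(object value)
    {
        // If the provider hands us a raw OracleClob, read it and dispose it
        // immediately so the LOB handle doesn't accumulate across 8M rows
        if (value is OracleClob clob)
        {
            using (clob)
            {
                return new ClobText { Value = clob.Value };
            }
        }
        return new ClobText { Value = value?.ToString() };
    }

    public override void SetValue(IDbDataParameter parameter, ClobText value)
    {
        parameter.Value = value?.Value;
    }
}

// Registered once at startup:
// SqlMapper.AddTypeHandler(new ClobTextHandler());
```

Whether the Oracle provider actually surfaces an OracleClob to the handler (rather than a pre-materialized string) would need to be verified against your driver version.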