`make_Fetch_Possible_And_Deterministic` with `Arel.sql` #1319
This looks like a bug in the adapter at first glance. From

```ruby
scope :largest_byte_sizes, -> (limit) { from(order(byte_size: :desc).limit(limit).select(:byte_size)) }
```

...

```ruby
Entry.largest_byte_sizes(samples).pick(Arel.sql("sum(byte_size), count(*), min(byte_size)"))
```

which expanded out is:

```ruby
Entry.from(order(byte_size: :desc).limit(samples).select(:byte_size)).pick(Arel.sql("sum(byte_size), count(*), min(byte_size)"))
```

Since there is already an order in the query, I wouldn't have thought the default order (…
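For background, T-SQL requires an `ORDER BY` clause before `OFFSET`/`FETCH`, which is why the adapter has to inject an ordering whenever `limit` is used. A minimal string-level sketch of that shape (`with_deterministic_fetch` is a hypothetical helper for illustration, not the adapter's implementation, which builds this via Arel visitors):

```ruby
# Hypothetical illustration only: T-SQL needs an ORDER BY for OFFSET/FETCH,
# so a query with a limit gets an ordering clause appended. The default
# ordering column here mirrors the adapter's ORDER BY [id] fallback.
def with_deterministic_fetch(select_sql, order_by: "[id]", limit: 1)
  "#{select_sql} ORDER BY #{order_by} OFFSET 0 ROWS FETCH NEXT #{limit} ROWS ONLY"
end

# Ordering an aggregate-only SELECT by [id] is exactly what SQL Server rejects:
with_deterministic_fetch("SELECT sum(byte_size) FROM entries")
```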
@andyundso I had a quick look and was not able to recreate the error that you are seeing. Would you be able to create a bug script that reproduces it? https://github.com/rails-sqlserver/activerecord-sqlserver-adapter/wiki/How-to-report-a-bug
@aidanharan can do. I am also not sure if it is really a bug, as there are other cases (mostly failures in the minitest suite) where I run into the ORDER BY issue.
@andyundso Could you try updating

```ruby
Entry.largest_byte_sizes(samples).pick(Arel.sql("sum(byte_size), count(*), min(byte_size)"))
```

to:

```ruby
Entry.largest_byte_sizes(samples).order(Arel.sql("count(*)")).pick(Arel.sql("sum(byte_size), count(*), min(byte_size)"))
```

If it works, then this is a known SQL Server/adapter limitation. If a query has an …
I will give you feedback on the weekend; there are so many failing tests for … Maybe I will even check …
@aidanharan so for …:

```ruby
current_maximum = maximum(:id)
current_maximum.blank? ? 0 : current_maximum - minimum(:id) + 1
```

Performance-wise, this code is worse than before, so I do not think it has any chance of getting accepted upstream. I will likely provide a separate gem with a bunch of … So I think we can close this issue.
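To illustrate the performance concern: the original `pick` fetched all three aggregates in a single round-trip, while this fallback issues two separate aggregate queries. A toy plain-Ruby stand-in (`FakeRelation` is hypothetical, not ActiveRecord):

```ruby
# Toy stub, illustrative only: the maximum/minimum fallback needs two
# round-trips, where the original pick fetched everything in one query.
class FakeRelation
  attr_reader :query_count

  def initialize(ids)
    @ids = ids
    @query_count = 0
  end

  def maximum(_column)
    @query_count += 1
    @ids.max
  end

  def minimum(_column)
    @query_count += 1
    @ids.min
  end
end

rel = FakeRelation.new([3, 4, 9])
current_maximum = rel.maximum(:id)
estimate = current_maximum.nil? ? 0 : current_maximum - rel.minimum(:id) + 1
# Two statements were issued, and the id-range estimate is 7 even though
# only 3 rows exist, since ids with gaps inflate max - min + 1.
```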
@andyundso I meant to comment earlier. I had a look and I think your initial suggestion might be best. Currently, if an explicit ordering isn't given, an implicit ordering using the primary key is used. Instead, an implicit ordering using the first usable projection could be used. This would be a change to the adapter but should hopefully mean that `solid_cache` works with minimal changes. I've worked on a PR but am still trying to get all the adapter's tests passing. Once that's done I'll run the `solid_cache` tests to see what other changes might be needed.
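The "first usable projection" idea can be sketched in plain Ruby (hypothetical names; the actual change works on Arel nodes inside the adapter):

```ruby
# Hypothetical sketch: derive the implicit ordering column from the query's
# projections instead of always defaulting to the primary key. An
# attribute-like projection stands in for Arel::Attributes::Attribute here.
Attribute = Struct.new(:name)

def implicit_order_column(projections, primary_key)
  usable = projections.find { |p| p.is_a?(Attribute) }
  usable ? usable.name : primary_key
end

implicit_order_column([Attribute.new("byte_size")], "id")  # => "byte_size"
implicit_order_column([], "id")                            # => "id"
```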
@aidanharan here is a gist with the changes necessary to run the …

If the projection approach works, it would definitely be a win.
@andyundso The changes in #1322 allow the tests in …
`solid_cache` uses a couple of queries that look like this: …

Performance-wise, it is the best approach since you can fetch multiple values within one query. But since `pick` uses `limit` under the hood, the `sqlserver` adapter has to add an `ORDER BY` clause to use `OFFSET`. This by default is `ORDER BY [id]`, which in the case above throws an error: …

I looked a bit into `make_Fetch_Possible_And_Deterministic`, where this `ORDER BY` clause is added. In theory, it should be possible to find out which columns are referenced in the `SELECT` statements by looking into `o.cores.first.projections`. I think there are two cases: either the projection is already an `Arel::Attributes::Attribute` or a `String`.

If it's a string, we would have to extract the column name in order to get an `Arel::Attributes::Attribute` from the `Arel::Table`. We could separate it by each `,`, then check if any known column (using the `schema_cache`) is mentioned, likely using a similar regex as in `query_requires_identity_insert?`, where the identity columns are searched.

I am not sure if this is a smart approach, so I wanted to have your feedback before actually looking to implement this.
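A minimal plain-Ruby sketch of the string case described above, assuming a known column list such as the `schema_cache` would provide (`first_usable_column`, the column list, and the regex are all hypothetical, not adapter code):

```ruby
# Hypothetical sketch: split a string projection on commas and return the
# first fragment that references a known column, unwrapping simple
# aggregate calls like sum(byte_size) along the way.
KNOWN_COLUMNS = %w[id key value byte_size created_at].freeze

def first_usable_column(projection)
  projection.split(",").map(&:strip).each do |fragment|
    # Capture a bare column name, optionally inside func(...) or [brackets].
    candidate = fragment[/\A(?:\w+\()?\[?(\w+)\]?\)?/, 1]
    return candidate if candidate && KNOWN_COLUMNS.include?(candidate)
  end
  nil
end

first_usable_column("sum(byte_size), count(*), min(byte_size)")  # => "byte_size"
first_usable_column("count(*)")                                  # => nil
```

`count(*)` yields no usable column, so a caller would still need a fallback (e.g. the primary key) for aggregate-only projections.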