Monday, March 19, 2012
How can you prevent the data changes message during editing a row
I just upsized my Access application (mail-order processing system) and linked to SQL Server 2000 via ODBC table links from an Access 2000 front end.
By doing so, the standard optimistic locking of SQL Server 2000 comes into force and the option "lock edited record" on the Access client gets ignored.
I cannot work with optimistic locking because if, e.g., an order gets entered in a table and a stock allocation job runs in the background that updates the order header, all the entered data will be lost (or must be copied to the clipboard and re-entered ...).
Do you know how to overcome this problem? Problems like this should actually be quite common ...
Several years ago I used to work with a database called DataFlex.
This database had the best locking mechanism I have ever experienced.
If one enters data in a form, the pulled record does not get locked (as with optimistic locking).
It only gets locked shortly before the update - it works like this:
read a record from the table and display it in the form
user modifies data in the form
user clicks the update button
record gets locked and reread into the record buffer
client program compares every form field with the record buffer to see if any data was changed
if a field was changed, the record buffer is overwritten with the data the user entered
the record gets saved and unlocked
This way only the changed data fields get updated and changes of other users
do not get overwritten!
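In SQL Server terms, that flow could be sketched roughly like this (a sketch only - OrderHeader, OrderID, Qty, Notes and all the @-variables are hypothetical names; @OldQty/@OldNotes hold the values first shown in the form, @NewQty/@NewNotes hold what the user typed):

```sql
BEGIN TRAN

-- Lock the row and reread it into the "record buffer"
SELECT @BufQty = Qty, @BufNotes = Notes
FROM OrderHeader WITH (UPDLOCK, ROWLOCK)
WHERE OrderID = @OrderID

-- Keep concurrent changes: overwrite only the fields the user actually edited
IF @NewQty   <> @OldQty   SET @BufQty   = @NewQty
IF @NewNotes <> @OldNotes SET @BufNotes = @NewNotes

-- Save; the row lock is released at COMMIT
UPDATE OrderHeader
SET Qty = @BufQty, Notes = @BufNotes
WHERE OrderID = @OrderID

COMMIT TRAN
```

The UPDLOCK hint makes the reread and the write atomic with respect to other writers, which is the "lock shortly before the update" behaviour described above.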
Is there a way on SQL Server to do something similar, or how do you deal with this problem?
I also thought of writing an INSTEAD OF UPDATE trigger that checks whether any column differs between the Inserted table and the "REREAD" record buffer and applies the above-mentioned logic of DataFlex, but this stretches my SQL knowledge just a bit too far ...
Maybe you can help?
Any comments are appreciated!
Thanks in advance.
Oliver|||Bracket your user actions between BEGIN TRAN...COMMIT TRAN and do an
UPDATE WITH (ROWLOCK) setting a dateModified field to getdate()
immediately after the BEGIN. All subsequent modifications will be queued
until you fire your update and commit. You could also do a check on the
dateModified field and simply warn the user if it has changed since
they read the record; doing a field-by-field comparison is more
work than is necessary.
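A minimal sketch of that bracketing pattern (Orders, OrderID and dateModified are hypothetical names):

```sql
BEGIN TRAN

-- Stamp and lock the row first; other writers now queue behind this
-- row lock until the transaction commits.
UPDATE Orders WITH (ROWLOCK)
SET dateModified = getdate()
WHERE OrderID = @OrderID

-- ... apply the user's edits here with further UPDATE statements ...

COMMIT TRAN
```

For the warning variant, select dateModified when the row is first read and compare it against the current value before applying the edits.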
The warning is almost always the preferable solution, since it is
dangerous to assume that such a collision should be resolved in some
more or less arbitrary order when human intervention is not only
possible, but desirable.|||Oliver wrote:
> I just upsized my Access application (mail-order processing system)
> and linked to SQL server 2000 via ODBC tables links from an Access
> 2000 front end.
> By doing so the standard optimistic locking of SQL server 2000 comes
> into force and the option "lock edited record" on the Access Client
> gets ignored. I cannot work with optimistic locking because if e.g.
> an order gets entered in a table and e.g. a stock allocation job runs
> in the background that updates the order header,
> all the entered data will be lost (or must be copied to the clipboard
> and edited ...).
> Do you know how to overcome this problem - problems like this should
> actually be quite common ...
> Several years ago I used to work with a database called DataFlex.
> This database had the best locking mechanism I have ever experienced.
> If one enters data in a form the pulled record does not get locked
> (like optimistic locking).
> It only gets locked shortly before the update - it works like this:
> read a record from the table and display in the form
> user modifies data in form
> user clicks update button
> record gets locked and reread into the record buffer
> client program compares every form field with the record buffer if
> any data was changed
> if a field was changed the record buffer was overwritten with the
> data the user entered
> the record gets saved and unlocked
> This way only the changed data fields get updated and changes of
> other users do not get overwritten!
> Is there a way on SQL server to do something similar or how do you
> deal with this problem?
> I also thought of writing an INSTEAD OF UPDATE trigger and check if
> any column is different from the Inserted to the "REREAD" record
> buffer and apply the above mentioned logic of DataFlex but this
> stretches my SQL knowledge just a bit too far ...
> Maybe you can help?
> Any comments are appreciated!
> Thanks in advance.
> Oliver
Yes. This is easy. Add a TIMESTAMP column to each table where you need
this support. A timestamp column is not a date, but a column that SQL
Server automatically changes each time a row is updated. When you
initially query the row data, select the TIMESTAMP as well. When you
save the data from your stored procedure (ideally, you'll be using
stored procedures), compare the TIMESTAMP you selected with the
timestamp currently in the row. If they are different, then you know the
data was changed in the interim and can raise an error and have the
client application automatically re-query the data. Your update
statement can look something like this:
Update dbo.MyTable
Set
    Col1 = @Col1,
    Col2 = @Col2
Where
    ColPK = @ColPK
    and
    timestamp = @timestamp

If @@ROWCOUNT != 1 -- either the row was changed or it no longer exists
    RAISERROR ...
Else
    -- Everything is good to go
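Wrapped into a complete procedure, the pattern might look like this (a sketch; dbo.MyTable and its columns are the illustrative names from the snippet above, and the procedure name, parameter types and error message are assumptions):

```sql
CREATE PROCEDURE dbo.MyTable_Update
    @ColPK      int,
    @Col1       varchar(50),
    @Col2       varchar(50),
    @timestamp  binary(8)   -- TIMESTAMP value selected when the row was read
AS
    UPDATE dbo.MyTable
    SET Col1 = @Col1,
        Col2 = @Col2
    WHERE ColPK = @ColPK
      AND timestamp = @timestamp

    IF @@ROWCOUNT <> 1
        -- The row was changed or deleted in the interim: raise an error so
        -- the client can requery and show the user the current values.
        RAISERROR ('Row was modified by another user - please requery.', 16, 1)
```

Because the TIMESTAMP column changes automatically on every update, the WHERE clause silently matches zero rows whenever another session got there first, and @@ROWCOUNT exposes that.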
David Gugick - SQL Server MVP
Quest Software|||Hi David,
Thanks for your tips, but this sounds like a lot of programming to overcome
a problem that is actually a server's job.
Raising an error message when data changed in the background can only be
useful if there is a user at the other end.
What happens if a program gets caught out by a user, e.g. the user changes
data during the time the program read the row to update one column?
You would have to write code to get around the problem for every
transaction you are trying to do in a job like stock allocation, releasing
orders for delivery ... - that would be too much work for me, as my
application is big!
I think it would be much better if the server could check, in an update
trigger, whether there was a concurrent update of columns and update only
the column(s) that were changed by the current transaction. This way all
changes other transactions made will be kept and nobody has to decide
which data/changes to keep.
I am quite new to SQL Server programming and do not know all the ins and
outs of trigger transaction programming.
Could you or somebody else suggest some code to achieve this?
Thanking you in advance.
Oliver
"David Gugick" wrote:
> Oliver wrote:
> Yes. This is easy. Add a TIMESTAMP column to each table where you need
> this support. A timestamp column is not a date, but a column that SQL
> Server automatically changes each time a row is updated. When you
> initially query the row data, select the TIMESTAMP as well. When you
> save the data from your stored procedure (ideally, you 'll be using
> stored procedures), compare the TIMESTAMP you selected with the
> timestamp currently in the row. If they are different, then you know the
> data was changed in the interim and can raise an error and have the
> client application automatically re-query the data. Your update
> statement can look something like this:
> Update dbo.MyTable
> Set
>     Col1 = @Col1,
>     Col2 = @Col2
> Where
>     ColPK = @ColPK
>     and
>     timestamp = @timestamp
> If @@ROWCOUNT != 1 -- either the row was changed or it no longer exists
>     RAISERROR ...
> Else
>     -- Everything is good to go
>
>
> --
> David Gugick - SQL Server MVP
> Quest Software
>|||Oliver wrote:
> Hi David,
> Thanks for your tips but this sounds like a lot of programming to
> overcome
> a problem that actually is a server's job.
> Raising an error message when data changed in the background can only
> be useful if there is a user at the other end.
> What happens if a program gets caught out by a user e.g. the user
> changes data during the time the program read the row to update one
> column?
> You will have to program some code to get around the problem for
> every transaction you are trying to do in a job like stock
> allocation, release orders for delivery... - that would be too much
> work for me as my application is big!
> I think it would be much better if the server could check in an update
> trigger if there was a concurrent update of columns and just updates
> the column(s) that were changed by the current transaction. This way
> all changes other transactions did will be kept and nobody has to
> decide which data/changes to keep.
> I am quite new to SQL server programming and do not know all the ins
> and
> outs of trigger transaction programming.
> Could you or somebody else suggest some code how to achieve this?
> Thanking you in advance.
> Oliver
I don't agree with your assessment. This is an application programming
issue and not one for the database to manage on its own. You're
suggesting that the server update a row that has been updated in the
interim by another process just because the columns being updated are
not the same. I would argue that there's no way for SQL Server to know
if changing a single column value in a row would somehow affect business
rules and know whether or not the proposed row changes are valid.
Many applications can deal with this scenario by assuming the last
update should be the most current. In that scenario, there is no
additional programming required. If you need to manage concurrency and
changes to a row by another session to avoid overwriting those changes,
you should use a timestamp. You pass the timestamp value with the other
column values to your stored procedure and the procedure does the work.
The application should be able to handle the condition where the row was
not updated because an error was raised and then requery the data and
inform the end-user. All your DML should be done in stored procedures.
In the case where there is no end-user, your application code should log
an error condition that can be managed manually in an interactive
fashion or it could requery the data, assuming this is something it can
work around.
For a nightly batch process, you could lock the entire table within your
transaction.
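For example, a nightly batch job could take an exclusive table lock for the duration of its transaction (dbo.OrderHeader is a hypothetical table name):

```sql
BEGIN TRAN

-- TABLOCKX takes an exclusive lock on the whole table and HOLDLOCK keeps
-- it until COMMIT, so no other session can read or write rows mid-batch.
SELECT COUNT(*) FROM dbo.OrderHeader WITH (TABLOCKX, HOLDLOCK)

-- ... run the allocation / release work here ...

COMMIT TRAN
```

This trades concurrency for simplicity, which is usually acceptable off-hours.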
> What happens if a program gets caught out by a user e.g. the user
> changes data during the time the program read the row to update one
> column?
I'm not sure what you are describing here. A change is a change.
Presumably, you have a stored procedure to manage each type of change to
your data. Maybe you can elaborate on this part a little more.
David Gugick - SQL Server MVP
Quest Software
Wednesday, March 7, 2012
How can I write polling and processing queries
I don't want to fully update the MOLAP cache, and I also don't want to fully read the source database. That is why I am using a polling query to poll for changes and a processing query to get the changed records from the database. For information about polling and processing queries, see this link:
http://msdn2.microsoft.com/en-us/library/ms188965.aspx
But I am unable to write these queries to get all changes, i.e. inserted, updated and deleted records. Can anyone tell me how I can use these queries, or does SQL Server provide some other way to read just the changes in the source database, not the whole database?
This is about proactive caching, right? If the changes in the table happen to be
inserts, updates and deletes, then the processing will *most probably* need to be
done fully.
When you end up with full processing, if you need to use the polling notification
mechanism, then the processing query is not needed. In the case of deletes or
updates, the polling query should fetch something like the timestamp of the last
change in the database. Why not use automatic SQL notifications?
The link you provided discusses polling and processing queries for incremental
processing. That is possible only when new records appear in the table (only
inserts).
There is one case where incremental processing can be used with "inserts,
deletes and updates". Suppose at time t1 you have a state of records S1 in
your table. Suppose you manage to get proactive caching scheduled at a
well-known time t2. I suppose Client Initiated is the only reliable option for a
deterministic start of proactive caching. If between t1 and t2 the inserts,
updates and deletes in the table do not change the records from S1 but only
create and modify incremental records, then it is possible to use incremental
processing with proactive caching.
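For the insert-only case, a polling/processing pair along the lines of the linked MSDN article might look like this (dbo.FactSales and LoadKey are hypothetical names; the two parameters of the processing query are bound by Analysis Services to the previous and current polling values):

```sql
-- Polling query: returns a single row with a single column; Analysis
-- Services polls it and treats a changed value as "new data has arrived".
SELECT MAX(LoadKey) FROM dbo.FactSales

-- Processing query for incremental processing: fetches only the rows
-- inserted since the previous poll.
SELECT * FROM dbo.FactSales
WHERE LoadKey > ? AND LoadKey <= ?
```

Any monotonically increasing column (an IDENTITY key, a load timestamp) can play the role of LoadKey; deletes and updates to already-loaded rows are not captured by this pattern, which is why they force full processing.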