If you are struggling to beat the "one query per second" meme, try the following 2 things:

1. Only use a single connection for all access. SQLite operates in serialized mode by default, so the only time you need to lock is when you are trying to obtain the LastInsertRowId or perform explicit transactions across multiple rows. Trying to use the one-connection-per-query approach with SQLite is going to end very badly. (See the connection-handling sketch at the end of this section.)

2. Execute one time against a fresh DB: PRAGMA journal_mode=WAL

If at this point you are still finding that SQLite is not as fast or faster than SQL Server, MySQL, et al., then I would be very surprised.

SQLite has the lowest access latency that I am aware of. I do not think you can persist a row to disk faster with any other traditional SQL technology. Something about it living in the same process as your business application seems to help a lot. We support hundreds of simultaneous users in production, with 1-10 megs of business state tracked per user in a single SQLite database.

We aren't running any reports on our databases like this. I would argue it is a bad practice in general to mix OLTP and OLAP workloads on a single database instance, regardless of the specific technology involved. If we wanted to run an aggregate that could potentially impact live transactions, we would just copy the SQLite db to another server and perform the analysis there. We have some telemetry services which operate in this fashion: they go out to all of the SQLite databases, make a copy, and then run the analysis in another process (or on another machine). I am not aware of any hosted SQL technology which is capable of magically interleaving large aggregate queries with live transactions without one or both being impacted in some way. At the end of the day, you still have to go to disk on writes, and writes must be serialized against reads for basic consistency reasons.
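Below is a minimal sketch of that copy-then-analyze pattern, assuming Python's built-in sqlite3 module; the file names and the users table are illustrative, not from the original comments. SQLite's online backup API produces a transactionally consistent copy even while the live database is taking writes, which a plain file copy does not guarantee under load.

```python
import sqlite3

def snapshot_for_analysis(live_path: str, copy_path: str) -> None:
    # Use SQLite's online backup API rather than a plain file copy: it
    # yields a transactionally consistent snapshot even while the live
    # database is being written to.
    src = sqlite3.connect(live_path)
    dst = sqlite3.connect(copy_path)
    with dst:
        src.backup(dst)
    dst.close()
    src.close()

# The heavy aggregate then runs against the copy (in another process or
# on another machine), so the live OLTP database never sees it.
# "app.db", "analysis.db", and the "users" table are hypothetical names.
snapshot_for_analysis("app.db", "analysis.db")
olap = sqlite3.connect("analysis.db")
report = olap.execute(
    "SELECT name, COUNT(*) FROM users GROUP BY name"
).fetchall()
```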
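And to make the two tuning steps from the first comment concrete, here is a minimal sketch, again assuming Python's sqlite3 module (the schema and function names are hypothetical): one long-lived connection for the whole process, WAL enabled once, and a lock held only around the insert-then-read-rowid pair, per the advice above.

```python
import sqlite3
import threading

# Step 1: a single long-lived connection shared by the whole process.
# check_same_thread=False allows threads to share it; the lock below
# guards the one spot the comment calls out: reading the last insert
# rowid after an INSERT on a shared connection.
conn = sqlite3.connect("app.db", check_same_thread=False)
write_lock = threading.Lock()

# Step 2: run once against a fresh DB. WAL lets readers proceed while
# a write is in flight.
conn.execute("PRAGMA journal_mode=WAL")

conn.execute(
    "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
)

def insert_user(name: str) -> int:
    # Hold the lock around the INSERT + rowid read + commit so that
    # concurrent writers on the shared connection cannot interleave
    # between the statement and the rowid we return.
    with write_lock:
        cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        conn.commit()
        return cur.lastrowid
```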