Mirror of https://github.com/ArchiveBox/ArchiveBox.git, synced 2026-04-25 09:06:02 +03:00
[GH-ISSUE #1005] Feature Request: How to change the DB engine to MySQL or MongoDB #628
Originally created by @thinker007 on GitHub (Jul 27, 2022).
Original GitHub issue: https://github.com/ArchiveBox/ArchiveBox/issues/1005
What is the problem that your feature request solves
I need to archive web pages at large scale, so I need a better DB solution.
Describe the ideal specific solution you'd want, and whether it fits into any broader scope of changes
Change the DB from SQLite3 to MySQL or MongoDB for better archiving performance and large-scale web page archival.
What hacks or alternative solutions have you tried to solve the problem?
Move the data from SQLite3 to MySQL manually.
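The manual move described above is typically done by dumping the SQLite index to SQL text and replaying it against the target server. A minimal sketch, using a throwaway stand-in database rather than a real ArchiveBox index (the `snapshots` table here is illustrative, not ArchiveBox's actual schema):

```python
import os
import sqlite3
import tempfile

# Throwaway stand-in for the ArchiveBox index; a real migration would
# open index.sqlite3 from the archive directory instead.
path = os.path.join(tempfile.mkdtemp(), "index.sqlite3")
src = sqlite3.connect(path)
src.execute("CREATE TABLE snapshots (url TEXT)")
src.execute("INSERT INTO snapshots VALUES ('https://example.com')")
src.commit()

# iterdump() yields the schema and data as SQL statements. Note that
# SQLite's dialect (AUTOINCREMENT, quoting, type names) generally needs
# rewriting before MySQL will accept the dump.
dump = "\n".join(src.iterdump())
src.close()
```

In practice the dump is then piped into the target server's client after dialect fixes; nothing in ArchiveBox automates this.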
How badly do you want this new feature?
@pirate commented on GitHub (Jul 30, 2022):
Not supported currently, and we're not going to support this within the next few years (or likely ever), sorry. SQLite is a much more durable data format than MySQL or PostgreSQL, and it's easier on many levels because there's no need to run a dedicated server process.
This is also a common misconception. SQLite can handle multi-terabyte DBs and massive-scale applications; the limiting factor is the need for concurrent writers, not the size of the DB or anything "web scale".
See here for more explanation:
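As a concrete illustration of the concurrency point above (a minimal sketch, not ArchiveBox's actual configuration): SQLite's WAL journal mode lets readers proceed while a single writer holds the lock, and a busy timeout makes writers wait instead of failing immediately. The `index.sqlite3` filename is assumed here, matching ArchiveBox's default index file:

```python
import sqlite3

# Open (or create) the index DB; path assumed, adjust to your archive dir.
conn = sqlite3.connect("index.sqlite3")

# WAL mode allows concurrent readers alongside one writer, which is
# usually the relevant knob for contention, not the DB's size.
conn.execute("PRAGMA journal_mode=WAL;")

# Wait up to 30s for a lock instead of raising "database is locked".
conn.execute("PRAGMA busy_timeout=30000;")
conn.close()
```

WAL mode is persistent per database file, so it only needs to be set once.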
@thinker007 commented on GitHub (Aug 17, 2022):
When I try to archive 1 million URLs, the DB becomes a bottleneck! The DB lock problem pops up frequently.
@pirate commented on GitHub (Aug 20, 2022):
Are you archiving with multiple parallel processes? With a single process there is no db lock contention. It's more likely the parallel processes that are causing lock contention, not the size of the archive.
Please share a screenshot of the errors you see, along with any log output.