mirror of
https://github.com/quasar/Quasar.git
synced 2026-04-25 23:35:58 +03:00
[GH-ISSUE #97] Compression Algorithm ?? #43
Originally created by @DragonzMaster on GitHub (May 7, 2015).
Original GitHub issue: https://github.com/quasar/Quasar/issues/97
Is QuickLZ really the best choice as the compression algorithm?
There are TWO more algorithms that do a good job with compression. After searching for some time, I found Snappy (the fastest of them) and LZ4 (whose latest version is very fast).
Both of them do a great job, and they are a bit faster than QuickLZ.
You can find a comparison between them here:
http://encode.ru/threads/1266-In-memory-benchmark-with-fastest-LZSS-%28QuickLZ-Snappy%29-compressors
@OpenSourceCS commented on GitHub (May 7, 2015):
Yes, you are right. LZ4 is pretty good, yet there is no maintained pure C# implementation; there are only bindings.
Please see #96 #95 #94 #90 #85
@DragonzMaster commented on GitHub (May 7, 2015):
As for LZ4, I found a library for .NET projects:
http://lz4net.codeplex.com/
@OpenSourceCS
@OpenSourceCS commented on GitHub (May 7, 2015):
It is a binding project. It requires the VC++ 2010 Redistributable. Moreover, the native DLL of LZ4 (which is written in C) cannot be bundled into Client.exe.
@OpenSourceCS commented on GitHub (May 7, 2015):
The QuickLZ implementation in xRAT is just a single file written in pure C#. It keeps things simple.
@yankejustin commented on GitHub (May 7, 2015):
Hmmm... I can port an algorithm if it is much better than what we currently use. Please note that the article mentioned is 4 years old. If possible, please show me a more recent benchmark and I will certainly convert it to C#. :)
@DragonzMaster commented on GitHub (May 7, 2015):
As for Snappy, it hasn't changed a lot, as you can see here: github.com/google/snappy@eeead8dc38. It might help you :)
And I am trying to find a newer benchmark for LZ4.
@DragonzMaster commented on GitHub (May 7, 2015):
GOOD news:
Snappy (the extremely fast algorithm) has been available in C#/.NET for 4 years,
so you don't need to convert it. Here is the project:
https://github.com/Kintaro/SnappySharp
Edit: I want to mention that Google owns Snappy, so it should be stable and great 😆
@OpenSourceCS commented on GitHub (May 7, 2015):
We need to benchmark Snappy and QuickLZ before implementing anything.
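A minimal sketch of such a benchmark harness, in Python rather than the project's C#. Since QuickLZ, Snappy, and LZ4 are not in the standard library, zlib stands in as the codec here; the same harness applies to any pair of compress/decompress functions:

```python
import time
import zlib


def _timeit(fn, arg):
    """Wall-clock one call to fn(arg) in seconds."""
    start = time.perf_counter()
    fn(arg)
    return time.perf_counter() - start


def benchmark(name, compress, decompress, data, rounds=5):
    """Time a codec's compress/decompress and report its ratio.

    Best-of-N timing reduces scheduler noise; a round-trip check
    guards against a codec that is fast because it is broken.
    """
    c_time = min(_timeit(compress, data) for _ in range(rounds))
    compressed = compress(data)
    d_time = min(_timeit(decompress, compressed) for _ in range(rounds))
    assert decompress(compressed) == data  # round-trip sanity check
    return {
        "codec": name,
        "ratio": len(compressed) / len(data),  # smaller is better
        "compress_s": c_time,
        "decompress_s": d_time,
    }


if __name__ == "__main__":
    # Repetitive payload so the codec has redundancy to exploit.
    payload = b"quasar remote administration tool " * 10_000
    print(benchmark("zlib", zlib.compress, zlib.decompress, payload))
```

To compare the actual candidates, one would drop in each codec's compress/decompress pair (e.g. the QuickLZ C# port's equivalents) and compare ratios and timings on traffic-shaped payloads rather than synthetic ones.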
@DragonzMaster commented on GitHub (May 7, 2015):
I found more benchmarks; I hope you find them useful.
I want to mention that they use different implementations (some are Java),
but they show the difference between the algorithms and their performance.
LZ4 appears to be the fastest 👍
reference :
http://java-performance.info/performance-general-compression/
https://sites.google.com/site/powturbo/home/benchmark
http://lz4net.codeplex.com/wikipage?title=Comparison%20to%20other%20algorithms&referringTitle=Home
https://code.google.com/p/lz4/
http://catchchallenger.first-world.info//wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO
@OpenSourceCS commented on GitHub (May 7, 2015):
Does anyone see lag in network transmission? I think QuickLZ is sufficient for now, but after implementing other features we can come back to this. @MaxXor, please start assigning people to specific issues. Thanks!
@yankejustin commented on GitHub (May 7, 2015):
I have no issues with network transmission. ProtoBuf is actually very fast and efficient.
@MaxXor commented on GitHub (May 7, 2015):
DragonHunter recommended that we use QuickLZ instead of LZ4. I also think we should stick with it.
@yankejustin commented on GitHub (May 7, 2015):
By that, you mean we should continue using what we currently use? I would just like to clarify.
@MaxXor commented on GitHub (May 7, 2015):
Yes, correct. We'll change it when the current compression algorithm is no longer fast enough for our needs. But currently it is.
@yankejustin commented on GitHub (May 7, 2015):
Okay, thank you for the clarification. I agree.