mirror of
https://github.com/tzapu/WiFiManager.git
synced 2026-04-27 17:15:53 +03:00
[GH-ISSUE #26] Duplicate Wifi networks #24
Originally created by @lunanigra on GitHub (Dec 22, 2015).
Original GitHub issue: https://github.com/tzapu/WiFiManager/issues/26
Hello, when scanning in our company, all WiFi networks are listed 2-3 times.
I didn't have the chance for further investigation, but I assume the following reason... We have several access points providing the same WiFi network, meaning we can use network XYZ inside the whole building on all floors. So my assumption is that a WiFi network gets listed multiple times if multiple access points are in range.
Is there a chance of putting all items with the same name together?
Thanks, JC
@tablatronix commented on GitHub (Feb 18, 2016):
👍
I had this on one of my other issues and it was closed; I forgot to pull it out into its own issue.
This quickly becomes a bit more complicated, as some could be secure, some open, with different signal levels, so you've got to pick the strongest one for display; maybe also show the count (n) so you know it's an institutional AP.
It almost becomes easier to do in JS.
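A minimal sketch of that idea in plain C++ (the struct name and fields here are assumptions for illustration, not WiFiManager's actual scan types): keep the strongest RSSI per SSID and count how many APs share the name.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical stand-in for one scanned access point; on the ESP the real
// scan data comes from WiFi.SSID(i) / WiFi.RSSI(i) / WiFi.encryptionType(i).
struct ApEntry {
    std::string ssid;
    int rssi;   // dBm; higher (closer to 0) = stronger
    bool open;  // true if unencrypted
    int count;  // how many APs broadcast this SSID (filled in below)
};

// Collapse duplicate SSIDs: keep the strongest copy for display and count
// the duplicates, so the UI could show e.g. "XYZ (3)" for an institutional AP.
std::vector<ApEntry> dedupeBySsid(const std::vector<ApEntry> &scan) {
    std::vector<ApEntry> out;
    for (const ApEntry &ap : scan) {
        auto it = std::find_if(out.begin(), out.end(),
            [&ap](const ApEntry &e) { return e.ssid == ap.ssid; });
        if (it == out.end()) {
            ApEntry e = ap;
            e.count = 1;          // first AP seen with this SSID
            out.push_back(e);
        } else {
            int n = it->count + 1;
            if (ap.rssi > it->rssi) *it = ap;  // stronger copy wins
            it->count = n;
        }
    }
    return out;
}
```

With several APs broadcasting XYZ, the list shrinks to a single XYZ entry carrying the best RSSI, with `count` telling you how many were seen.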
@tzapu commented on GitHub (Feb 19, 2016):
it would be easier to do in JS... maybe it's worth considering, as then you would only have to pass the array to JS, plus a PROGMEM string containing the actual JS...
what i'm mostly worried about is uncontrollable growth and memory usage with this, which isn't justified in my mind: you really only use the lib once in a blue moon to make your life easier, but it takes up resources on your ESP at all times...
@tzapu commented on GitHub (Mar 8, 2016):
i am closing this, don't think there's a big need for it really
if anyone strongly feels differently, reopen for another discussion
@tablatronix commented on GitHub (Mar 8, 2016):
I'm gonna PR it regardless
@tzapu commented on GitHub (Mar 8, 2016):
:))
thank you, i'll reopen then
@tzapu commented on GitHub (Mar 9, 2016):
@tablatronix Shawn, i just had a thought.
If you are considering going down the javascript route for this, which would be a lot easier, there's a new function i added for adding custom `<head>` elements: `setCustomHeadElement("")`.
it could then be offered as a custom head/script snippet that anyone can add, and whoever is not interested can choose to skip it.
there could be a whole range of customisations offered like this for maximum flexibility
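Since `setCustomHeadElement()` takes a raw HTML fragment, a client-side dedup could ship as an opt-in script string. The sketch below builds such a fragment in plain C++ (compilable on a desktop toolchain; on the ESP it would live in PROGMEM). Note the `.wm-network` class is a made-up selector, since WiFiManager's generated markup isn't shown in this thread.

```cpp
#include <cstring>

// Hypothetical <head> snippet for wifiManager.setCustomHeadElement(...):
// on page load, walk the scanned-network rows and hide repeated SSIDs.
// NOTE: '.wm-network' is an assumed class name for each network row.
static const char CUSTOM_HEAD[] =
    "<script>\n"
    "window.addEventListener('load', function () {\n"
    "  var seen = {};\n"
    "  var rows = document.querySelectorAll('.wm-network');\n"
    "  for (var i = 0; i < rows.length; i++) {\n"
    "    var ssid = rows[i].textContent.trim();\n"
    "    if (seen[ssid]) rows[i].style.display = 'none';\n"
    "    else seen[ssid] = true;\n"
    "  }\n"
    "});\n"
    "</script>";

// In an actual sketch: wifiManager.setCustomHeadElement(CUSTOM_HEAD);
```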
@tablatronix commented on GitHub (Mar 9, 2016):
I was gonna just do another n² loop and remove them, but only if duplicates are detected; but I noticed you changed your sort loop to a std::sort comparison.
Is it any faster?
I guess a map might work; I barely know C though, and had a hard time trying to modify references in a lambda. shrug.
I was also going to use the sort loop to check if there were any dups before bothering with another loop.
Or do them both in one loop, but it wound up being easier not to combine the two in case the sort algorithm needed to change for memory vs. speed on different platforms. I imagine an n² loop is pretty fast even at 16 MHz...
I'll look at the JS solution again since you added that callout.
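Since std::sort is already in play, one alternative to a separate n² pass is to sort by SSID (strongest first within each group) and drop adjacent duplicates in a single linear `std::unique` pass. This is a sketch under assumed types, not the actual PR:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical scan-result type for illustration.
struct Network {
    std::string ssid;
    int rssi;  // dBm, higher = stronger
};

// Sort by SSID with the strongest RSSI first within each SSID group, then
// erase adjacent duplicates -- after sorting, every duplicate sits next to
// its twin, so std::unique's single pass replaces the n^2 scan.
void removeDuplicateAps(std::vector<Network> &nets) {
    std::sort(nets.begin(), nets.end(),
              [](const Network &a, const Network &b) {
                  if (a.ssid != b.ssid) return a.ssid < b.ssid;
                  return a.rssi > b.rssi;  // strongest copy sorts first
              });
    nets.erase(std::unique(nets.begin(), nets.end(),
                           [](const Network &a, const Network &b) {
                               return a.ssid == b.ssid;  // same name = dup
                           }),
               nets.end());
}
```

One trade-off: this leaves the list ordered by SSID, so a final re-sort by RSSI would be needed to restore a strongest-first display order.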
@tzapu commented on GitHub (Mar 9, 2016):
welcome to the club, i barely know C as well :)
the std::sort came from a discussion on how best to do it. i doubt very much that at the amount of n's we have, it will make any difference. it was more of a "try and see if you can do it" kind of thing :D
any way of doing it would probably not have much impact; it might make sense to pull the networks into an array before any looping first, that might make it faster...
JS would be interesting, maybe with a count as well, but that would probably waste the most memory too...
@tablatronix commented on GitHub (Mar 9, 2016):
yeah, i looked at a hash table and it was not even worth considering for memory; the extra microsecond for an n² loop was better. for example, i have 22 duplicate networks at this one location, more elsewhere.
hmm, but there will probably only ever be 1 or 2 duplicate ids...
@tablatronix commented on GitHub (Mar 9, 2016):
Well, it looks like std::sort uses quicksort and is around 30% faster in my real-world test.
@tablatronix commented on GitHub (Mar 9, 2016):
(Before/after timing screenshots were attached on the original GitHub comment.)
@tablatronix commented on GitHub (Mar 9, 2016):
I added a `_removeDupAps` global in case you want to expose it or toggle it. Adds 7 ms to my code for 22 dups; negligible.
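A minimal sketch of what such a toggle could look like, using the flag name from the comment (the function and types around it are assumptions; today's WiFiManager exposes a similar public switch, `setRemoveDuplicateAPs()`):

```cpp
#include <set>
#include <string>
#include <vector>

// Assumed toggle mirroring the _removeDupAps global mentioned above;
// when false, the scan list is returned untouched.
static bool _removeDupAps = true;

// Return the scan list with repeated SSIDs dropped (first occurrence kept),
// or unchanged when the dedup toggle is off.
std::vector<std::string> filterSsids(const std::vector<std::string> &scan) {
    if (!_removeDupAps) return scan;   // feature switched off
    std::set<std::string> seen;
    std::vector<std::string> out;
    for (const std::string &ssid : scan)
        if (seen.insert(ssid).second)  // true only the first time an SSID appears
            out.push_back(ssid);
    return out;
}
```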
@lunanigra commented on GitHub (Mar 9, 2016):
Cool :-)
@tzapu commented on GitHub (Mar 10, 2016):
that's perfect, will pull it in a bit later and add the toggle for it
boy, you've got lots of networks :))