WP:RETENTION: This editor is willing to lend a helping hand. Just ask.
This page has archives. Sections older than 32.5 days may be automatically archived by ClueBot III. |
Please comment on Talk:Useful idiot
The feedback request service is asking for participation in this request for comment on Talk:Useful idiot. Legobot (talk) 04:29, 3 January 2018 (UTC)
Parsing database dumps
import bz2
from lxml import etree

file_path = 'enwiki-latest-pages-articles.xml.bz2'
tag_count = 0

with bz2.open(file_path, 'rb') as file:
    context = etree.iterparse(file, events=('end',), tag='{http://www.mediawiki.org/xml/export-0.10/}page')
    for _, elem in context:
        tag_count += 1
        if tag_count % 1000 == 0:
            print(tag_count)
        # free the processed element and its already-parsed siblings
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]

print("Total page tags:", tag_count)
- @Qwerfjkl: Hi. Yes, you are right: the pod gets OOMKilled. It was confusing, there wasn't much information around, and it was a pain in the butt. When I had this (similar) issue, I solved it by loading the xml file in parts, and then breaking the file into chunks based on the closing tags. Only after breaking it into chunks could I process the dump. As soon as I get to the computer, I will upload all the relevant programs to a github repository and let you know, but that may take 12ish hours from now. —usernamekiran (talk) 18:25, 2 November 2023 (UTC)
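The GitHub repository mentioned above is not reproduced here, so the following is only a rough, untested sketch of the chunking idea described in this message: splitting the uncompressed dump into smaller XML files at </page> boundaries. The file names and the pages-per-chunk value are illustrative, not taken from the original scripts.

PAGES_PER_CHUNK = 100_000  # illustrative; not a value given in the discussion

def split_dump(in_path='enwiki-latest-pages-articles.xml', prefix='chunk'):
    """Copy the uncompressed dump line by line, starting a new output file
    after every PAGES_PER_CHUNK closing </page> tags."""
    chunk_no, pages_in_chunk = 1, 0
    out = open(f'{prefix}_{chunk_no}.xml', 'w', encoding='utf-8')
    with open(in_path, encoding='utf-8') as dump:
        for line in dump:
            out.write(line)
            if '</page>' in line:
                pages_in_chunk += 1
                if pages_in_chunk >= PAGES_PER_CHUNK:
                    out.close()
                    chunk_no += 1
                    pages_in_chunk = 0
                    out = open(f'{prefix}_{chunk_no}.xml', 'w', encoding='utf-8')
    out.close()

Note that chunks produced this way are not wrapped in a <mediawiki> root element, which may be related to the "something wrong/missing" issue mentioned in the follow-up message.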
- It seems to be split into 27 chunks already (e.g. enwiki-20231101-pages-articles1.xml-p1p41242.bz2), I might try seeing if those work. — Qwerfjkltalk 19:14, 2 November 2023 (UTC)
- @Qwerfjkl: Hi. Sorry, I forgot to mention: we need to unzip the bzip file before processing it, to conserve memory/RAM. The unzip/extraction process takes a lot of time though, and the uncompressed file is very big, so on Toolforge it is recommended to delete it once you are done with it. But there seems to be something wrong/missing with the way I broke the unzipped xml file into chunks; the results of my next script were not very accurate. Kindly let me know if there was something wrong with it. Also, please don't mind my rudimentary Python skills; this script could also use better error handling. github repository. —usernamekiran (talk) 12:35, 3 November 2023 (UTC)
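As a side note on the decompression step described above, the extraction can be streamed in fixed-size blocks so that neither the archive nor the output ever needs to fit in RAM. A minimal sketch, with illustrative paths:

import bz2
import shutil

# Stream-decompress the dump to disk in 16 MiB blocks; memory use stays small.
# Remember to delete the uncompressed file once you are done with it.
with bz2.open('enwiki-latest-pages-articles.xml.bz2', 'rb') as src, \
        open('enwiki-latest-pages-articles.xml', 'wb') as dst:
    shutil.copyfileobj(src, dst, length=16 * 1024 * 1024)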
- Maybe the following script would work; I couldn't test it, as I am on a different computer.
import bz2
from lxml import etree

file_path = 'enwiki-latest-pages-articles.xml.bz2'
tag_count = 0

with bz2.open(file_path, 'rb') as file:
    context = etree.iterparse(file, events=('end',), tag='{http://www.mediawiki.org/xml/export-0.10/}page')
    for _, elem in context:
        tag_count += 1
        # free the processed element and its already-parsed siblings
        elem.clear()
        while elem.getprevious() is not None:
            del elem.getparent()[0]
        if tag_count % 1000 == 0:
            print("Processed", tag_count, "pages")

print("Total page tags:", tag_count)
@Qwerfjkl: —usernamekiran (talk) 15:12, 3 November 2023 (UTC)
- Correct me if I'm wrong, but that seems to essentially be the same as my script. — Qwerfjkltalk 16:29, 3 November 2023 (UTC)
- @Qwerfjkl: yeah, you are right. When I first looked at your script, I thought (mistakenly) that something was wrong with the counting method. So I pasted your script into Windows Notepad and then got busy IRL; when I came back I started to edit your script. In short, there is no issue with your script. Sorry about the muck-up. Did you try breaking the original/unzipped xml file into smaller xml files, and then parsing those files? "etree.iterparse" shouldn't consume much memory, and the script is also removing the processed element(s) from memory, so I am not sure why this might be failing. Are you sure it is OOMKilled? What do the .err/.out files say? Maybe the lxml dependency was not installed correctly on your Toolforge account? —usernamekiran (talk) 16:53, 3 November 2023 (UTC)
- @Qwerfjkl: Hi. Currently I'm out of town, with no computer. I will look into this as soon as I get back, hopefully we can find the issue. —usernamekiran (talk) 13:53, 5 November 2023 (UTC)
- @Qwerfjkl: Hi. I am back home. Did you get any new information regarding the parsing? —usernamekiran (talk) 18:25, 12 November 2023 (UTC)
- No, I've been taking a break to work on some personal coding projects. — Qwerfjkltalk 21:20, 12 November 2023 (UTC)
- @Qwerfjkl: Hi. I tested your script through cron. The only change I made was to save the progress to a text file instead of printing it. Worked fine for me. —usernamekiran (talk) 19:07, 15 November 2023 (UTC)
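The change described above is small; a sketch of what it might look like inside the parsing loop (the log file name is illustrative):

# Replacing print("Processed", tag_count, "pages") with a file append:
if tag_count % 1000 == 0:
    with open('progress.txt', 'a', encoding='utf-8') as log:
        log.write(f'Processed {tag_count} pages\n')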
- I restarted the task; the start and end will be documented at https://test.wikipedia.org/wiki/User:KiranBOT/MOSTREFS/log —usernamekiran (talk) 19:21, 15 November 2023 (UTC)
- Thanks, saving the output to a text file seems to have worked for me. — Qwerfjkltalk 20:36, 15 November 2023 (UTC)
- I've been indexing the pages with the code below, but it seems to run out of memory after about 3 million pages (at least, it's killed; I assume memory is the problem). Do you know any way to fix this? — Qwerfjkltalk 12:55, 17 November 2023 (UTC)
def init_memory_index_of_articles(file_path):
    articles = set()
    redirects = dict()
    manually_excluded_pages = set()
    printm("Building index of Articles ...")
    with bz2.open(file_path, 'rb') as file:
        context = etree.iterparse(file, events=('end',), tag='{http://www.mediawiki.org/xml/export-0.10/}page')
        count = 0
        for _, elem in context:
            count += 1
            if count % 100000 == 0:
                printm(count)
            title_element = elem.find('{http://www.mediawiki.org/xml/export-0.10/}title')
            text_element = elem.find('{http://www.mediawiki.org/xml/export-0.10/}revision/{http://www.mediawiki.org/xml/export-0.10/}text')
            # Ensure the title and text elements are found
            if title_element is not None and text_element is not None:
                title = title_element.text
                text = text_element.text
                if not text:
                    continue
                # Check pagetype:
                redirect_element = elem.find('{http://www.mediawiki.org/xml/export-0.10/}redirect')
                if redirect_element is not None:
                    redirect_target = redirect_element.get('title')
                    redirects[title] = redirect_target
                    # printm(f'Processed {title}, is redirect')
                elif '{{disambig' in text.lower():
                    manually_excluded_pages.add(title)  # exclude dabs
                    # printm(f'Processed {title}, is dab')
                else:
                    articles.add(title)
                    # printm(f'Processed {title}, is article')
            # Clear the element from memory
            elem.clear()
            # Also eliminate now-empty references from the root node to elem
            while elem.getprevious() is not None:
                del elem.getparent()[0]
    printm("Completed creating memory index ...")
    return redirects, articles
- @Qwerfjkl: Hi. Currently I do not have the dump in my Toolforge userspace; I wanted to copy it from your userspace, but I can't find it there either. Maybe it is not public, or are you trying to process the file remotely? Generally speaking, manually running a script from the terminal (from a virtual env) or from bastion/shell is not recommended for resource-heavy or time-consuming tasks. Scheduled jobs from a yaml file are more efficient (than one-off jobs). If you create a yaml file as described here, you would get better/automated logging. For one-off or testing purposes, I use crons similar to "55 19 18 11 *". I am saying this because I am getting an inkling that you are running the script from the terminal; maybe that's why it is accumulating memory even though you are clearing each element after processing it. Other than that, I can't think of anything else for now. Kindly let me know if you can think of anything. —usernamekiran (talk) 16:35, 17 November 2023 (UTC)
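For illustration of the scheduled-job suggestion above, an entry in a Toolforge jobs-framework yaml file might look roughly like the sketch below; the job name, script name, image label, and exact field names are assumptions from memory, so verify them against the current Toolforge documentation before use:

# Rough sketch only; field names and image label may differ from the
# current Toolforge jobs framework, so check the docs before using.
- name: parse-dump
  command: python3 parse_dump.py
  image: python3.11
  schedule: "55 19 18 11 *"  # cron-style schedule, as in the message above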
- I have been running from terminal, yes. I'll try switching to yaml files and see if that helps. I noticed it was stored on the Toolforge servers, so I switched to using
file_path = '/mnt/nfs/dumps-clouddumps1002.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2'
— Qwerfjkltalk 17:15, 17 November 2023 (UTC)
- It turns out the very large dictionary was causing the memory issue. I'm going to try a different approach, using a multistream version. — Qwerfjkltalk 14:22, 18 November 2023 (UTC)
- @Qwerfjkl: Hi. I had worked on the dump script almost a year ago; I think I had tried the multistream approach too, but I changed my approach halfway. To circumvent the memory issues, you should save the output to multiple files, e.g. redirects_1.json, redirects_2.json, articles_1.json, and so on. I think you should keep the maximum number of entries in these files to either 5,000 or 10,000, definitely not more than 10k. —usernamekiran (talk) 10:48, 21 November 2023 (UTC)
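A minimal sketch of the chunked-output suggestion above, assuming the redirects_N.json / articles_N.json naming from the message and the 10,000-entry cap; the helper class itself is illustrative, not taken from the original scripts:

import json

MAX_ENTRIES = 10_000  # upper bound suggested above

class ChunkedWriter:
    """Buffer title -> value pairs and flush them to numbered JSON files."""
    def __init__(self, prefix):
        self.prefix = prefix
        self.chunk_no = 0
        self.buffer = {}

    def add(self, key, value=''):
        self.buffer[key] = value
        if len(self.buffer) >= MAX_ENTRIES:
            self.flush()

    def flush(self):
        if not self.buffer:
            return
        self.chunk_no += 1
        with open(f'{self.prefix}_{self.chunk_no}.json', 'w', encoding='utf-8') as f:
            json.dump(self.buffer, f)
        self.buffer = {}

# Usage inside the dump-parsing loop:
#   redirects = ChunkedWriter('redirects'); articles = ChunkedWriter('articles')
#   redirects.add(title, redirect_target)  /  articles.add(title)
# and call .flush() on both writers after the loop finishes.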
- ┌───────────────────────────┘
So you suggest creating 2300 json files? — Qwerfjkltalk 15:06, 21 November 2023 (UTC)
- @Qwerfjkl: yes. If you want/need, then at the end of the script, or through another script, you can combine all these files together, and delete the ~2400 files. —usernamekiran (talk) 16:23, 21 November 2023 (UTC)
- But surely reading those files for usage would just overload the memory, especially if combined? — Qwerfjkltalk 16:36, 21 November 2023 (UTC)
- @Qwerfjkl: not much, by my estimation, the complete operation of merging the files would need around 550 MiB. —usernamekiran (talk) 20:40, 21 November 2023 (UTC)
- I see, I'll try that. My question is, I suppose, if we can load it by reading json files, why can't we store it by reading xml dumps? — Qwerfjkltalk 21:24, 21 November 2023 (UTC)
- @Qwerfjkl: it is related to memory accumulation and Python's garbage collector, and we will be using a streaming approach to merge the files, which should be memory efficient. I can't be 100% sure though. I started a test run on Toolforge a few minutes ago; let's see how it goes (so far, the program is using around 40Mi). As I was not sure why you need the data, and I had to make some minor changes in the script, the output would be in txt files, in the following format:
'Anarchism', 'Albedo', 'A',
and
'AccessibleComputing': 'Computer accessibility', 'AfghanistanHistory': 'History of Afghanistan', 'AfghanistanGeography': 'Geography of Afghanistan',
I hope that format is okay for you. —usernamekiran (talk) 12:52, 22 November 2023 (UTC)
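A minimal sketch of the streaming merge mentioned a few replies above: chunk files are copied into one combined file line by line, so only a single line is held in memory at a time. The glob pattern and output name are illustrative.

import glob

def merge_chunks(pattern='redirects_*.txt', out_path='redirects_all.txt'):
    """Concatenate chunk files without loading any of them fully into memory."""
    # Note: sorted() is lexical, so a numeric sort key is needed if the
    # chunks must be merged strictly in numbered order.
    with open(out_path, 'w', encoding='utf-8') as out:
        for chunk_path in sorted(glob.glob(pattern)):
            with open(chunk_path, encoding='utf-8') as chunk:
                for line in chunk:
                    out.write(line)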
- All I need is, for a given page title, to check whether it exists and, if it does, whether it is a redirect (and if so, what the redirect target is). — Qwerfjkltalk 15:29, 22 November 2023 (UTC)
- @Qwerfjkl: the script finished successfully; it created the articles file at 378 MB and the redirects files at 555 MB, and the highest memory consumption of the script was 108.4 MiB. —usernamekiran (talk) 18:54, 22 November 2023 (UTC)
- My issue would be when reading the files, especially if they were combined (for my purposes, I think I'll just represent redirects as 'RedirectTitle': 'RedirectTarget' and articles as 'ArticleTitle': '', in the same file). — Qwerfjkltalk 19:36, 22 November 2023 (UTC)
- Okay, I've implemented this. I have a 2 GB JSON file containing the relevant data, but when I try to load it into memory using
json.load(file)
after about a minute it gets killed. — Qwerfjkltalk 15:52, 29 November 2023 (UTC)
- I'm currently trying to store the data in a SQL database. — Qwerfjkltalk 17:39, 9 December 2023 (UTC)
- @Qwerfjkl: Hi, sorry for the delayed reply; I was feeling a little under the weather. Unfortunately, I do not have any experience with databases, so I can't help you much. What are you trying to achieve, I mean what's your goal/target? Maybe I/we can come up with some workaround bypassing the use of SQL? —usernamekiran (talk) 10:32, 16 December 2023 (UTC)
- I'm trying to restart the Missing Redirects Project. You can see this for an example of the output I'm trying to get. — Qwerfjkltalk 11:31, 16 December 2023 (UTC)
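The thread does not say which database engine was eventually used; as a hedged sketch, a single SQLite table keyed on the page title would support the lookup described earlier (does the title exist, and if it is a redirect, what is the target) without loading the 2 GB JSON into memory. The table and column names here are illustrative.

import sqlite3

# Illustrative sketch: store title -> redirect target ('' for plain articles)
# so lookups work from disk instead of an in-memory dictionary.
conn = sqlite3.connect('pages.db')
conn.execute('CREATE TABLE IF NOT EXISTS pages (title TEXT PRIMARY KEY, target TEXT)')

def add_page(title, target=''):
    conn.execute('INSERT OR REPLACE INTO pages VALUES (?, ?)', (title, target))

def lookup(title):
    """Return None if the title is missing, '' for an article, or the redirect target."""
    row = conn.execute('SELECT target FROM pages WHERE title = ?', (title,)).fetchone()
    return row[0] if row else None

add_page('Computer accessibility')
add_page('AccessibleComputing', 'Computer accessibility')
conn.commit()
print(lookup('AccessibleComputing'))  # -> 'Computer accessibility'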
Feedback request: History and geography request for comment
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 19:30, 13 November 2023 (UTC)
Feedback request: Maths, science, and technology request for comment
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 08:30, 20 November 2023 (UTC)
Tech News: 2023-48
MediaWiki message delivery 23:07, 27 November 2023 (UTC)
ArbCom 2023 Elections voter message
Hello! Voting in the 2023 Arbitration Committee elections is now open until 23:59 (UTC) on Monday, 11 December 2023. All eligible users are allowed to vote. Users with alternate accounts may only vote once.
The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.
If you wish to participate in the 2023 election, please review the candidates and submit your choices on the voting page. If you no longer wish to receive these messages, you may add {{NoACEMM}} to your user talk page. MediaWiki message delivery (talk) 00:41, 28 November 2023 (UTC)
Notification of administrators without tools
Greetings, Usernamekiran. You are receiving this notification because you've agreed to consider endorsing prospective admin candidates identified by the process outlined at Administrators without tools. Recently, the following editor(s) received this distinction and the associated endearing title:
The Signpost: 4 December 2023
- In the media: Turmoil on Hebrew Wikipedia, grave dancing, Olga's impact and inspiring Bhutanese nuns
- Disinformation report: "Wikipedia and the assault on history"
- Comix: Bold comics for a new age
- Essay: I am going to die
- Featured content: Real gangsters move in silence
- Traffic report: And it's hard to watch some cricket, in the cold November Rain
- Humour: Mandy Rice-Davies Applies
Tech News: 2023-49
MediaWiki message delivery 23:48, 4 December 2023 (UTC)
Feedback request: History and geography request for comment
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 21:30, 6 December 2023 (UTC)
Question from Arvelynalmeda15 (00:16, 8 December 2023)
how can i add about me... myself information --Arvelynalmeda15 (talk) 00:16, 8 December 2023 (UTC)
Feedback request: Politics, government, and law request for comment
You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 06:30, 8 December 2023 (UTC)
Question from Sujan the search engineer (10:01, 8 December 2023)
I've added a company's details with references. Could you please review it --Sujan the search engineer (talk) 10:01, 8 December 2023 (UTC)
Administrators' newsletter – December 2023
News and updates for administrators from the past month (November 2023).
- Ajpolino
- Lourdes
- Mairi
- RockMFR
- Somno
- WilyD
- Beeblebrox → Just Step Sideways
- Following a talk page discussion, the Administrators' accountability policy has been updated to note that while it is considered best practice for administrators to have notifications (pings) enabled, this is not mandatory. Administrators who do not use notifications are now strongly encouraged to indicate this on their user page.
- Following a motion, the Extended Confirmed Restriction has been amended, removing the allowance for non-extended-confirmed editors to post constructive comments on the "Talk:" namespace. Now, non-extended-confirmed editors may use the "Talk:" namespace solely to make edit requests related to articles within the topic area, provided that their actions are not disruptive.
- The Arbitration Committee has announced a call for Checkusers and Oversighters, stating that it will currently be accepting applications for CheckUser and/or Oversight permissions at any point in the year.
- Eligible users are invited to vote on candidates for the Arbitration Committee until 23:59 December 11, 2023 (UTC). Candidate statements can be seen here.
Guild of Copy Editors December 2023 Newsletter
Guild of Copy Editors December 2023 Newsletter
Hello, and welcome to the December 2023 newsletter, a quarterly digest of Guild activities since September. Don't forget that you can unsubscribe at any time; see below.
Election news: The Guild needs coordinators! If you'd like to help out, you may nominate yourself or any suitable editor—with their permission—for the Election of Coordinators for the first half of 2024. Nominations will close at 23:59 on 15 December (UTC). Voting begins immediately after the close of nominations and closes at 23:59 on 31 December. All editors in good standing (not under current sanctions) are eligible, and self-nominations are welcome. Coordinators normally serve a six-month term that ends at 23:59 on 30 June.
Drive: Of the 69 editors who signed up for the September Backlog Elimination Drive, 40 copy-edited at least one article. Between them, they copy-edited 661,214 words in 290 articles. Barnstars awarded are listed here.
Blitz: Of the 22 editors who signed up for the October Copy Editing Blitz, 13 copy-edited at least one article. Between them, they copy-edited 109,327 words in 52 articles. Barnstars awarded are listed here.
Drive: During the November Backlog Elimination Drive, 38 of the 58 editors who signed up copy-edited at least one article. Between them, they copy-edited 458,620 words in 234 articles. Barnstars awarded are listed here.
Blitz: Our December Copy Editing Blitz will run from 10 to 16 December. Barnstars awarded will be posted here.
Progress report: As of 20:33, 10 December 2023 (UTC), GOCE copyeditors have processed 344 requests since 1 January, and the backlog stands at 2,191 articles.
Other news: Our Annual Report for 2023 is planned for release in the new year.
Thank you all again for your participation; we wouldn't be able to achieve what we have without you! Cheers from your GOCE coordinators Dhtwiki, Miniapolis and Zippybonzo. To discontinue receiving GOCE newsletters, please remove your name from our mailing list.
Message sent by Baffle gab1978 using MediaWiki message delivery (talk) 20:54, 10 December 2023 (UTC)
Tech News: 2023-50
MediaWiki message delivery 02:11, 12 December 2023 (UTC)
Notification of administrators without tools
Greetings, Usernamekiran. You are receiving this notification because you've agreed to consider endorsing prospective admin candidates identified by the process outlined at Administrators without tools. Recently, the following editor(s) received this distinction and the associated endearing title:
Question from Jeymstuncer (09:11, 13 December 2023)
Hello, I am a series music composer. I have TV series published. these series have accounts on wikipedia. how do I tag my name there. --Jeymstuncer (talk) 09:11, 13 December 2023 (UTC)
- @Jeymstuncer: Hello. If there is a reliable source mentioning your name as composer, then you should add something like "XYZ is the composer of this TV series<ref>link to reliable source</ref>" to the article of that TV series. The link to the source should be inside the ref tags. You will find more information about reliable sources at WP:RS, but most importantly, kindly read WP:COI first. —usernamekiran (talk) 10:23, 16 December 2023 (UTC)
Question from Amigopet (06:47, 16 December 2023)
Hello “Houseblaster” and ‘Userrnamekiran’, thank you for offering to help me.
How do you link part of the edited text to a name that is already on Wikipedia? --Amigopet (talk) 06:47, 16 December 2023 (UTC)
- @Amigopet: Hello. You have to put that particular text inside two square brackets; for example, [[Earth]] links to the article Earth, and [[Earth|our planet]] displays "our planet" while linking to the same article. You will be able to see that type of bracket whenever you are editing text with links. You will be able to find more information at Help:Wikitext#Wikilinks. —usernamekiran (talk) 10:13, 16 December 2023 (UTC)
Question from Amigopet (20:36, 17 December 2023)
I am a member of the Royal Queensland Art Society (Brisbane Branch) which is a 137 year old non for profit organisation. Although we have 3 part-time paid staff, most of the work for the Society is done by volunteers such as me. I have noticed many Wikipedia articles about well-known deceased artists who were active members, and often office bearers of the Society, have little or no reference to the Society. My question is: is there a conflict of interest if I edit such articles to reflect the connection of these artists to the Society? I will only use published books as reference sources.
Thank you for your guidance. --Amigopet (talk) 20:36, 17 December 2023 (UTC)
Notification of administrators without tools
Greetings, Usernamekiran. You are receiving this notification because you've agreed to consider endorsing prospective admin candidates identified by the process outlined at Administrators without tools. Recently, the following editor(s) received this distinction and the associated endearing title:
Tech News: 2023-51
MediaWiki message delivery 16:16, 18 December 2023 (UTC)
Question from Hello ji mein to humlog bhi flight mein to humlog on Tose Naina Milaai Ke (07:56, 19 December 2023)
Hello ji, what are you doing? --Hello ji mein to humlog bhi flight mein to humlog (talk) 07:56, 19 December 2023 (UTC)
New pages patrol January 2024 Backlog drive
New Page Patrol | January 2024 Articles Backlog Drive
You're receiving this message because you are a new page patroller. To opt-out of future mailings, please remove yourself here.
MediaWiki message delivery (talk) 02:11, 20 December 2023 (UTC)
Voting for the WikiProject Military History newcomer of the year and military historian of the year awards for 2023 is now open!
Voting is now open for the WikiProject Military History newcomer of the year and military historian of the year awards for 2023! The top editors will be awarded the coveted Gold Wiki. Cast your votes here and here respectively. Voting closes at 23:59 on 30 December 2023. On behalf of the coordinators, wishing you the very best for the festive season and the new year. Hawkeye7 (talk · contribs) via MediaWiki message delivery (talk) 23:56, 22 December 2023 (UTC)
The Signpost: 24 December 2023
- Special report: Did the Chinese Communist Party send astroturfers to sabotage a hacktivist's Wikipedia article?
- News and notes: The Italian Public Domain wars continue, Wikimedia RU set to dissolve, and a recap of WLM 2023
- In the media: Consider the humble fork
- Discussion report: Arabic Wikipedia blackout; Wikimedians discuss SpongeBob, copyrights, and AI
- In focus: Liquidation of Wikimedia RU
- Technology report: Dark mode is coming
- Recent research: "LLMs Know More, Hallucinate Less" with Wikidata
- Gallery: A feast of holidays and carols
- Comix: Lollus lmaois 200C tincture
- Crossword: when the crossword is sus
- Traffic report: What's the big deal? I'm an animal!
- From the editor: A piccy iz worth OVAR 9000!!!11oneone! wordz ^_^
- Humour: Guess the joke contest
Scripts++ Newsletter – Issue 23
Notification of administrators without tools
Greetings, Usernamekiran. You are receiving this notification because you've agreed to consider endorsing prospective admin candidates identified by the process outlined at Administrators without tools. Recently, the following editor(s) received this distinction and the associated endearing title:
Blocked your bot
Hello. I've blocked your bot as it is doing weird things - at Wikipedia:In the news/Posted/December 2004 the text gets smaller and smaller (and contained a {{delete}} which is how I noticed) but more concerningly at Wikipedia:In the news/Posted/January 2005 it's added a NSFW image. I presume that's vandalism it's importing from elsewhere, but regardless, it doesn't strike me as something a bot should be doing. Happy for anyone to unblock so long as you know what it is doing. SmartSE (talk) 19:09, 30 December 2023 (UTC)