QuestionId | AnswerId | QuestionBody | QuestionTitle | AnswerBody |
---|---|---|---|---|
76383987 | 76384806 | I am following this tutorial on creating notifications.
However, I cannot seem to do the following:
NotificationManager notificationManager = getSystemService(NotificationManager.class);
Because my Android studio reports the error:
Required Type: Context
Provided: Class <android.app.NotificationManager>
reason: Class<NotificationManager> is not compatible with Context
Please let me know how to resolve this issue.
| Notification Manager getSystemService() call not working | I found this was my solution:
NotificationManager notificationManager = (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
|
76382344 | 76382710 | In modern Fortran, we can call a subroutine from C by using C binding. E.g. The Fortran subroutine will look like this
subroutine my_routine(...) bind (C, name="my_routine")
However, if the fortran subroutine is an old f77 subroutine, this binding is presumably not an available solution. What would be the best alternative?
| Can an f77 subroutine be called from C? |
However, if the fortran subroutine is an old f77 subroutine, this binding is presumably not an available solution.
It depends. Modern Fortran is largely backwards compatible with Fortran 77. Your F77 subroutine will not come with a bind attribute, but if you're willing to add one and compile it with a modern Fortran compiler then chances are pretty good that it will work just as you expect of code originally written for a newer Fortran version.
On the other hand, what C interop brings to Fortran is standardization, not a new capability. People have been calling Fortran routines from C almost as long as people have been using C.
The issue here is that the specifics are system and compiler dependent. If you don't rely on C interop, then you will need to accommodate your Fortran compiler's name-mangling and argument-passing conventions to successfully call Fortran-compiled functions from C code. These vary. There used to be a fair amount of tools and documentation for such activity. I imagine it's harder to find these days, but much of it should still work.
What would be the best alternative?
That depends on your specific constraints. If you're up for modifying the Fortran source and compiling with a modern compiler then I'd at least start with adding a bind attribute to the function, and see how that plays out.
|
76382433 | 76382759 | I apologize from the start as I am not allowed to share the workbook I am working on (it has confidential information from work) but I will do my best to explain what is happening and what my issue is.
I have two sheets, "Tracker" and "Reviewers". In the tracker names are recorded in column L and their submission is recorded in column M. Everything runs on a serial code in column A so there are blank cells between names. Some people have multiple submissions so their names show multiple times in column L. In the reviewers sheet, I have:
=UNIQUE(FILTER(Tracker!L4:L4999,Tracker!L4:L4999<>0))
In cell A2 to pull all the names of people who have a submission. This works flawlessly and adapts to include any new people. Then in cell B2 I have written:
=SUMPRODUCT(IF(ISBLANK(FILTER(Tracker!$L$4:$M$4999,Tracker!$L$4:$L$4999=Reviewers!A2#))=TRUE,1,0))
The idea here was to get a count of how many "submissions" people have without actually writing anything. It is filtering the list of names and submissions by name in the list we just created, checking if their "submission" is a blank cell, then adding them up. Issue is that it works when I filter by cell A2 but not when I filter by the function that spills out of cell A2 (A2#). I need it to be adaptive so if new names are added it can make the list longer, hence why I cannot just pull the cells down the list (A2, A3, A4,...). How would you go about getting a check of how many are blank like this?
As an example, Tracker could have:
Name  Submission
Jim   Idea
Bob   Idea
Pam   (blank)
Sam   Idea
Jim   (blank)
Bob   Idea
Jim   (blank)
Pam   Idea
And Reviewers should return:
Name  #Blank
Jim   2
Bob   0
Pam   1
Sam   0
I hope this makes sense and I hope you can help me edit the equation in cell B2 of the Reviewers sheet to be adaptive and spill the results.
| How can I make an adaptive list to check against another adaptive list? | =LET(d,DROP(FILTER(A:B,A:A<>""),1),
n,INDEX(d,,1),
s,INDEX(d,,2),
u,UNIQUE(n),
m,MMULT(--(TOROW(n)=u),--(s="")),
HSTACK(u,m))
Change the filter range (and maybe the lines to drop) and the index numbers to your situation.
I think this would work in your case:
=LET(d,DROP(FILTER(Tracker!$L$4:$M$4999,Tracker!$M$4:$M$4999<>""),1),
n,INDEX(d,,1),
s,INDEX(d,,2),
u,UNIQUE(n),
m,MMULT(--(TOROW(n)=u),--(s="")),
HSTACK(u,m))
|
76382452 | 76382772 | Is it technically impossible to show the data outside of the list? I searched the internet but couldn't find any answers at all smh -_-
I want to display the values of the list's data rows outside of the ListView.builder section.
Output:
[ ListView Builder Screen ]
Name: You, Age: 20 Name: Him, Age: 20
(An output photo was attached here.)
String name = userList[index].name;
int? age = userList[index].age;
class _Passenger extends State<Passenger> {
TextEditingController nameController = TextEditingController();
TextEditingController ageController = TextEditingController();
int currentIndex = 0;
final form = GlobalKey<FormState>();
bool update = false;
final User user = User(name: "", age: int.tryParse(''));
List<User> userList = [
User(name: "You", age: 20),
User(name: "Him", age: 20),
];
String text = '';
int? number = int.tryParse('');
@override
Widget build(BuildContext context) {
return MaterialApp( debugShowCheckedModeBanner: false,
home: Scaffold(
body: Column(children: <Widget>[
Column(
children: <Widget>[
Container(
height: 550,
decoration: BoxDecoration(border: Border.all(color: Colors.black)),
child: ListView.builder(
itemCount: userList.length,
itemBuilder: (context, index) {
String name = userList[index].name;
int? age = userList[index].age;
return SizedBox(
width: 20,
child: Card(
color: Colors.grey,
child: Padding(
padding: const EdgeInsets.all(2.0),
child: ListTile(
title: Text( "Name: $name Age: $age"),
))),
);
}),
),
],
),
Container( child: Text("Display the data here, How?") ),
//Add Button
Container(
width: 150,
height: 50,
margin: const EdgeInsets.all(10),
child: ElevatedButton(
onPressed: () {
showDialog(
context: context,
builder: (context) => SimpleDialog(children: [
TextField(
decoration: const InputDecoration(labelText: 'Name'),
onChanged: (value) {
setState(() {
text = value;
});
},
),
TextField(
keyboardType: TextInputType.number,
decoration: const InputDecoration(labelText: 'Age'),
onChanged: (value) {
setState(() {
number = int.parse(value);
});
},
),
ElevatedButton(
onPressed: () {
setState(() {
userList.add(User(name: text, age: number));
});
},
child: const Text('Add'))
]));
},
child: const Text("Add"),
)),
])));
}
}
class User {
String name;
int? age;
User({
required this.name,
this.age,
});
}
| How to show the data of the list outside of the area of ListView Builder in Flutter? | So... if it's the same list, just do the following.
Instead of:
Container( child: Text("Display the data here, How?") )
do:
SingleChildScrollView(
scrollDirection: Axis.horizontal,
child: Row(
mainAxisAlignment: MainAxisAlignment.center,
children: userList
.map((user) => Text("Name: ${user.name}, Age: ${user.age} "))
.toList(),
),
),
I added a SingleChildScrollView with horizontal scrolling to avoid overflow problems.
To send the list of users to another page, just do:
Navigator.push(
context,
MaterialPageRoute(
settings: const RouteSettings(name: "no-route"),
builder: (context) => OtherPage(userList: userList),
),
);
class OtherPage extends StatelessWidget {
final List<User> userList;
const OtherPage({
required this.userList,
Key? key,
}) : super(key: key);
@override
Widget build(BuildContext context) {
return Scaffold(
body: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: userList
.map(
(user) => Text("Name: ${user.name}, Age: ${user.age} "))
.toList(),
),
);
}
}
|
76380806 | 76381408 | I'm attempting to call a callable cloud function (which is already deployed) from a client app and getting this error on the GCP logs:
{
httpRequest: {9}
insertId: "647865c20002422d2d32b259"
labels: {1}
logName: "projects/faker-app-flutter-firebase-dev/logs/run.googleapis.com%2Frequests"
receiveTimestamp: "2023-06-01T09:32:50.154902339Z"
resource: {2}
severity: "WARNING"
spanId: "11982344486849947204"
textPayload: "The request was not authorized to invoke this service. Read more at https://cloud.google.com/run/docs/securing/authenticating Additional troubleshooting documentation can be found at: https://cloud.google.com/run/docs/troubleshooting#401"
timestamp: "2023-06-01T09:32:50.138090Z"
trace: "projects/faker-app-flutter-firebase-dev/traces/ddcb5a4df500af085b7a7f6f89a72ace"
traceSampled: true
}
The same function works correctly from the Firebase Local Emulator, so I assume this is a permissions issue related to IAM and service accounts (I still don't understand too well how IAM works).
Here is my code:
import * as admin from "firebase-admin"
import * as functions from "firebase-functions/v2"
import * as logger from "firebase-functions/logger";
// https://github.com/firebase/firebase-tools/issues/1532
if (admin.apps.length === 0) {
admin.initializeApp()
}
export const deleteAllUserJobs = functions.https.onCall(async (context: functions.https.CallableRequest) => {
const uid = context.auth?.uid
if (uid === undefined) {
throw new functions.https.HttpsError("unauthenticated", "You need to be authenticated to perform this action")
}
const firestore = admin.firestore()
const collectionRef = firestore.collection(`/users/${uid}/jobs`)
const collection = await collectionRef.get()
logger.debug(`Deleting ${collection.docs.length} docs at "/users/${uid}/jobs"`)
// transaction version
await firestore.runTransaction(async (transaction) => {
for (const doc of collection.docs) {
transaction.delete(firestore.doc(`/users/${uid}/jobs/${doc.id}`))
}
})
logger.debug(`Deleted ${collection.docs.length} docs at "/users/${uid}/jobs"`)
return {"success": true}
})
The function was deployed with firebase deploy --only functions, and I made sure the client app calls this function when the user is already authorized.
According to the docs:
If you encounter permissions errors when deploying functions, make sure that the appropriate IAM roles are assigned to the user running the deployment commands.
The docs also link to this page, which says:
Cloud Functions for Firebase permissions
For a list and descriptions of Cloud Functions permissions, refer to
the IAM documentation.
Be aware that the deployment of functions requires a specific
configuration of permissions that aren't included in the standard
Firebase predefined roles. To deploy functions, use one of the
following options:
Delegate the deployment of functions to a project Owner.
If you're deploying only non-HTTP functions, then a project Editor can deploy your functions.
Delegate deployment of functions to a project member who has the following two roles:
Cloud Functions Admin role (roles/cloudfunctions.admin)
Service Account User role (roles/iam.serviceAccountUser)
A project Owner can assign these roles to a project member using the Google Cloud Console or gcloud CLI. For detailed steps and
security implications for this role configuration, refer to the IAM
documentation.
But like I said, I can successfully deploy the function. It's when I try to execute it that I get an error log.
In summary, what I'm trying to do is quite basic:
write a callable cloud function
deploy it
call it from the client app
When the function runs, it fails with the error above.
Any advice? Do I need to set a specific IAM role?
| Firebase Cloud Functions V2: The request was not authorized to invoke this service | Open https://console.cloud.google.com/iam-admin/<project_name>, find the service account you are using in your Firebase project, and add the role "Cloud Functions Invoker".
It's as if the Admin, Editor, or Viewer roles are about managing the function on GCP (they don't let you invoke it), while the Invoker role allows that account to invoke the function.
|
76384708 | 76384807 | I'm using NiFi 1.21.0 and nifi-marklogic-nar-1.9.1.6.
I have been using the PutMarkLogic 1.9.1.6 processor to ingest documents into MarkLogic-db for more than 2 years. I recently noticed that the processor doesn't support adding document quality (PFB the processor image).
So I have created a new issue against marklogic/nifi project.
| How to set document quality while ingesting document into MarkLogic through PutMarkLogic NiFi processor? | This enhancement was fixed as part of https://github.com/marklogic/nifi/pull/121. Therefore use nifi-marklogic-nar-1.15.3.1 or later version to set document-quality. I'm currently using PutMarkLogic 1.16.3.2 and I can now see a provision to add Quality.
|
76381224 | 76381430 | I have used the below code to Loop through selection on outlook and convert into Hyperlinks and change Text To Display Link.
It works, but it adds the ascending number incrementally across all cells, like this picture:
What I need is for the ascending number to restart for each row, like this picture:
Thanks in advance for all your help.
Sub Hyperlink_Outlook()
Dim wDoc As Word.Document, rngSel As Word.Selection, cel As Cell, i As Long
Set wDoc = Application.ActiveInspector.WordEditor
Set rngSel = wDoc.Windows(1).Selection
If Not rngSel Is Nothing And rngSel.Information(wdWithInTable) Then
If rngSel.Range.Cells.Count > 0 Then
For Each cel In rngSel.Cells
If Len(cel.Range.Text) > 10 Then
i = i + 1
wDoc.Hyperlinks.Add cel.Range, _
Address:=Left(cel.Range.Text, Len(cel.Range.Text) - 1), _
TextToDisplay:="Attachment " & i
End If
Next
End If
End If
End Sub
| Loop through rows of a table on outlook and change (Text To Display) to an ascending number per each row | Try looping through rows first (the following is not tested):
Sub Hyperlink_Outlook()
Dim wDoc As Word.Document, rngSel As Word.Selection, cel As Cell, i As Long
Dim r As Variant
Set wDoc = Application.ActiveInspector.WordEditor
Set rngSel = wDoc.Windows(1).Selection
If Not rngSel Is Nothing And rngSel.Information(wdWithInTable) Then
If rngSel.Range.Cells.Count > 0 Then
For Each r In rngSel.Rows
i = 0 ' reset i here
For Each cel In r.Cells
If Len(cel.Range.Text) > 10 Then
i = i + 1
wDoc.Hyperlinks.Add cel.Range, _
Address:=Left(cel.Range.Text, Len(cel.Range.Text) - 1), _
TextToDisplay:="Attachment " & i
End If
Next cel
Next r
End If
End If
End Sub
|
76382642 | 76382822 | I have a df with 5 columns: a date column with minute-wise data over a few days (each day starts at 9:15 and ends at 15:29), and four other columns named first, max, min, and last that contain numeric values.
I wrote code that takes x minutes as a variable and resamples the rows into x-minute intervals.
The 'first' of a resampled row will be the 'first' of its first row.
The 'last' of a resampled row will be the 'last' of its last row.
The 'max' of a resampled row will be the highest value across its rows in the max column.
The 'min' of a resampled row will be the lowest value across its rows in the min column.
And the date will hold datetimes at x-minute intervals.
My problem is that for some interval lengths the code works perfectly, but for others I get the wrong time in the first row: instead of the resampled data starting from 9:15, it starts at some other minute.
Code:
def resample_df(df, x_minutes = '15T'):
df.set_index('date', inplace=True)
resampled_df = df.resample(x_minutes).agg({
'first': 'first',
'max': 'max',
'min': 'min',
'last': 'last'
})
resampled_df.reset_index(inplace=True)
return resampled_df
Input:
date first max min last
0 2023-06-01 09:15:00 0.014657 0.966861 0.556195 0.903073
1 2023-06-01 09:16:00 0.255174 0.607714 0.845804 0.039933
2 2023-06-01 09:17:00 0.956839 0.881803 0.876322 0.552568
Output: when x_minutes = '6T'
date first max min last
0 2023-06-01 09:12:00 0.014657 0.966861 0.556195 0.552568
1 2023-06-01 09:18:00 0.437867 0.988005 0.162957 0.897419
2 2023-06-01 09:24:00 0.296486 0.370957 0.013994 0.108506
The data shows 9:12 but I don't have 9:12. Why is it giving me the wrong data?
Note: It works perfectly when the number of minutes entered is odd, e.g. x_minutes = '15T'.
Code to create a dummy df:
import pandas as pd
import random
from datetime import datetime, timedelta
# Define the number of days for which data is generated
num_days = 5
# Define the start and end times for each day
start_time = datetime.strptime('09:15', '%H:%M').time()
end_time = datetime.strptime('15:30', '%H:%M').time()
# Create a list of all the timestamps for the specified days
timestamps = []
current_date = datetime.now().replace(hour=start_time.hour, minute=start_time.minute, second=0, microsecond=0)
end_date = current_date + timedelta(days=num_days)
while current_date < end_date:
current_time = current_date.time()
if start_time <= current_time <= end_time:
timestamps.append(current_date)
current_date += timedelta(minutes=1)
# Generate random data for each column
data = {
'date': timestamps,
'first': [random.random() for _ in range(len(timestamps))],
'max': [random.random() for _ in range(len(timestamps))],
'min': [random.random() for _ in range(len(timestamps))],
'last': [random.random() for _ in range(len(timestamps))]
}
# Create the DataFrame
df = pd.DataFrame(data)
# Display the resulting DataFrame
display(df)
| Resampling Rows minute-wise not working for Even Minutes in Python DataFrame | Use:
resampled_df = df.resample(x_minutes, origin = 'start').agg({
'first': 'first',
'max': 'max',
'min': 'min',
'last': 'last'
})
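For context on why 9:12 shows up at all: by default, resample anchors its bins to the start of the day (origin='start_day'), so a 6-minute grid counted from midnight puts 09:15 into the bin labelled 09:12, while origin='start' anchors the grid at the first timestamp in the data. A minimal sketch of the difference, using a tiny hypothetical frame rather than the data above:
import pandas as pd

# hypothetical frame: one row per minute starting at 09:15
idx = pd.date_range("2023-06-01 09:15", periods=10, freq="1min")
df = pd.DataFrame({"first": range(10)}, index=idx)

# default origin counts 6-minute bins from midnight -> first label is 09:12
print(df.resample("6T").agg({"first": "first"}).index[0])
# origin="start" anchors the bins at the first timestamp -> first label is 09:15
print(df.resample("6T", origin="start").agg({"first": "first"}).index[0])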
|
76384685 | 76384818 | If I have a toggle which updates its state from an external async load but also by user input, how can I differentiate those two? E.g., to perform a special action only on user action.
Group {
Toggle(isOn: $on) {
EmptyView()
}
}
.onChange(of: on) { newValue in
was "on" changed by user or onAppear async update?
}
.onAppear {
async update on
}
PS: this is mostly for macOS, and there the tapGesture on Toggle doesn't work
| SwiftUI Toggle how to distinguish changing value by UI action vs changing programmatically | If you want a side effect for user actions, you can use a custom wrapper Binding:
struct ContentView: View {
@State private var on: Bool = false
var userManagedOn: Binding<Bool> {
.init {
return on
} set: { newValue in
print("Side effect")
on = newValue
}
}
var body: some View {
VStack {
Group {
Toggle(isOn: userManagedOn) {
EmptyView()
}
}
}
.padding()
.onAppear {
Task { @MainActor in
try? await Task.sleep(nanoseconds: NSEC_PER_SEC)
on.toggle()
}
}
}
}
|
76380987 | 76381434 | <input type="checkbox" id="darkmode-toggle" class="peer invisible h-0 w-0" />
<label for="darkmode-toggle" class="btn-toggle group">
<svg class="icon absolute">
<use href="../../assets/icons/spirit.svg#sun" />
</svg>
<svg class="icon absolute group-[peer-checked]:fill-secondary-dark-300">
<use href="../../assets/icons/spirit.svg#moon" />
</svg>
</label>
In the given HTML code (using Tailwind CSS), I want to change the color of the icon when the associated input checkbox is checked.
The attempted selector used is group-[peer-checked]:fill-secondary-dark-300. However, the desired icon color change is not happening.
How can I achieve the desired result of changing the color of the icon when the input checkbox is checked using the provided selector?
This is what I want to achieve with this selector:
group (select the parent "label")
[peer-checked] (when the sibling of the label "input" is checked)
fill-secondary-dark-300: change the icon color.
| How can I change the color of an icon when a checkbox is checked using Tailwind? | You could consider using group-[.peer:checked+&]::
tailwind.config = {
theme: {
extend: {
colors: {
'secondary-dark-300': 'red',
},
},
},
};
<script src="https://cdn.tailwindcss.com"></script>
<input type="checkbox" id="darkmode-toggle" class="peer invisible h-0 w-0" />
<label for="darkmode-toggle" class="btn-toggle group">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor" class="w-6 h-6">
<path d="M12 2.25a.75.75 0 01.75.75v2.25a.75.75 0 01-1.5 0V3a.75.75 0 01.75-.75zM7.5 12a4.5 4.5 0 119 0 4.5 4.5 0 01-9 0zM18.894 6.166a.75.75 0 00-1.06-1.06l-1.591 1.59a.75.75 0 101.06 1.061l1.591-1.59zM21.75 12a.75.75 0 01-.75.75h-2.25a.75.75 0 010-1.5H21a.75.75 0 01.75.75zM17.834 18.894a.75.75 0 001.06-1.06l-1.59-1.591a.75.75 0 10-1.061 1.06l1.59 1.591zM12 18a.75.75 0 01.75.75V21a.75.75 0 01-1.5 0v-2.25A.75.75 0 0112 18zM7.758 17.303a.75.75 0 00-1.061-1.06l-1.591 1.59a.75.75 0 001.06 1.061l1.591-1.59zM6 12a.75.75 0 01-.75.75H3a.75.75 0 010-1.5h2.25A.75.75 0 016 12zM6.697 7.757a.75.75 0 001.06-1.06l-1.59-1.591a.75.75 0 00-1.061 1.06l1.59 1.591z" />
</svg>
<svg class="w-6 h-6 group-[.peer:checked+&]:fill-secondary-dark-300" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor">
<path fill-rule="evenodd" d="M9.528 1.718a.75.75 0 01.162.819A8.97 8.97 0 009 6a9 9 0 009 9 8.97 8.97 0 003.463-.69.75.75 0 01.981.98 10.503 10.503 0 01-9.694 6.46c-5.799 0-10.5-4.701-10.5-10.5 0-4.368 2.667-8.112 6.46-9.694a.75.75 0 01.818.162z" clip-rule="evenodd" />
</svg>
</label>
|
76380618 | 76381442 | In onblur I need to call alert(), but this doesn't work in Chrome and Firefox. See https://jsfiddle.net/mimomade/5sur482w/1/
In Firefox :focus-visible stays after leaving the 2nd and 4th input field and is not removed.
In Chrome I can't leave the 2nd input field, although the 1st doesn't have any problem.
| Javascript - alert problem with onblur and focus-visible Firefox/Chrome | At the very bottom is the code with both bugs fixed. Your initial JavaScript looks like this:
// Has different bugs in Firefox and Chrome.
function blurring(el) {
console.log(el.id + ' pre alert');
alert('blurring ' + el.id);
console.log(el.id + ' post alert');
}
In Firefox, your apparent bug actually masks a bug similar to what you're encountering in Chrome. When the alert is removed, the code has the intended behavior, so alert and the event are interacting in a weird way. In this specific case, to get around this, we can just wait for the event to finish by wrapping the function in a zero-millisecond timeout.
// Has a similar bug in both browsers.
function blurring(el) {
console.log(el.id + ' pre alert');
setTimeout(function () {
alert('blurring ' + el.id);
console.log(el.id + ' post alert');
}, 0);
}
In Chrome, your bug appears to be caused by the blur event being emitted each time the alert box is closed. Luckily, because we wait for the events to finish, the active element should be the newly selected input element instead of whatever the browser set it to. This means that ensuring el and document.activeElement are different is sufficient to fix this bug.
// addresses both bugs.
function blurring(el) {
console.log(el.id + ' pre alert');
setTimeout(function () {
if (document.activeElement !== el) {
alert('blurring ' + el.id);
console.log(el.id + ' post alert');
}
}, 0);
}
|
76382726 | 76382845 | I'm pretty new to TypeScript, as well as using the T3 stack (React Query / Tanstack Query). I'm trying to type companyId as string, so that I don't have to type companyId as string every time I use it later on it in the code, but I can't figure out how to best to that or what the best practice is with this stack... I'm used to plain old JavaScript and useEffects to do API calls (and probably writing worse code).
Note: the following code exists at /pages/companies/[id].tsx
The following is my first attempt, but I get a "Rendered more hooks than during the previous render" error at "const { data: company} ...", which makes sense:
const CompanyPage: NextPage = () => {
const router = useRouter()
const companyId = router.query.id
if (!companyId || Array.isArray(companyId)) return <div>Loading...</div> // have to check for Array.isArray because of NextJS/Typescript bug
const { data: company } = api.companies.getSingleById.useQuery({companyId: companyId});
if (!company ) return <div>Loading...</div>
...
return (...)
I tried doing the following, but I was not allowed because the type for companyId from router.query.id is string | string[] | undefined.
const CompanyPage: NextPage = () => {
const router = useRouter()
const companyId: string = router.query.id // Type 'string | string[] | undefined' is not assignable to type 'string'
const { data: company } = api.companies.getSingleById.useQuery({companyId: companyId});
if (!company ) return <div>Loading...</div>
...
return (...)
UPDATE:
I have now changed it to the following, which seems to work, but it doesn't feel like it's the correct way to do things. (With this method, I only have to write companyId as string once, which is fine.)
const CompanyPage: NextPage = () => {
const router = useRouter()
const companyId = router.query.id
const { data: company } = api.companies.getSingleById.useQuery({companyId: companyId as string});
if (!companyId || Array.isArray(companyId)) return <div>Loading...</div> // have to check for Array.isArray because of NextJS/Typescript bug
if (!company ) return <div>Loading...</div>
...
return (...)
ANSWER:
Thank you to Fabio for the accepted answer.
I'm destructuring router.query into multiple variables on other routes, so this is an example of doing that based on the accepted answer:
const { companyId, locationId } = useMemo(() => ({
companyId: router.query?.companyId?.toString() ?? "",
locationId: router.query?.locationId?.toString() ?? "",
}), [router.query?.companyId, router.query?.locationId]);
| How to type NextJS router.query.id as string? | You can use optional chaining and nullish coalescing in combination with the query params' toString() method, heres an example based on your code snippet:
const CompanyPage: NextPage = () => {
const router = useRouter();
// this result will be of type string
const companyId = useMemo(() => {
return router.query?.id?.toString?.() ?? "";
}, [router.query?.id]);
const { data: company } = api.companies.getSingleById.useQuery({
companyId: companyId,
});
if (!company) return <div>Loading...</div>;
return; // ...
};
The reason the query parameters are of type string | string[] is that, if you had a catch-all segment, each of the URL parts would be split by / and presented to you as an array.
Example
If you had the route /some-segment/[...href] and the URL would be /some-segment/200/my-post following would apply:
The contents of the href query variable would be ["200", "my-post"]
Calling href?.toString?.() would return you "200,my-post" (the array elements joined with commas).
|
76384713 | 76384822 | I was trying to use a state variable's value inside another state variable, but when the sub state variable was updated elsewhere, its updated value didn't get reflected in the parent state variable. Why did this happen? Can I not use a state variable inside another state variable, or is there a specific way to use it?
Can anyone explain this please?
const [noOfPersonsForRoom, setNoOfPersonsForRoom] = useState([2]);
const [ageOfPerson1ForRoom, setAgeOfPerson1ForRoom] = useState([0]);
const [ageOfPerson2ForRoom, setAgeOfPerson2ForRoom] = useState([0]);
const [rooms, setRooms] = useState([
{
roomNo: 1,
noOfPersons: noOfPersonsForRoom[0],
ageOfPerson1: ageOfPerson1ForRoom[0],
ageOfPerson2: ageOfPerson2ForRoom[0]
},
]);
This is code of what I tried, with some changes.
To change the sub state variable, I used the following function, which is called in the onChange() of an input field:
const changeNoOfPersonsForRoom = (e, index) => {
let newNoOfPersonsForRoom = e.target.value;
setNoOfPersonsForRoom([
...noOfPersonsForRoom.slice(0, index),
e.target.value,
...noOfPersonsForRoom.slice(index + 1),
]);
};
| Tried to use state variable inside another state variable, but it didn't work as expected it to, Why ? (in ReactJS using function component) | Could you add how you update your state(s)?
But in general, states are not bound or connected to each other.
If you change one state, it won't update any other state.
const [rooms, setRooms] = useState([
{
roomNo: 1,
noOfPersons: noOfPersonsForRoom[0],
ageOfPerson1: ageOfPerson1ForRoom[0],
ageOfPerson2: ageOfPerson2ForRoom[0]
},
Given your example, you just set the initial state of rooms with the values of your previous states. Nothing more. If you need to update several states, you have to update each of them separately.
|
76382432 | 76382854 | I am trying to write R code for a dataset that checks whether the DAYS column contains consecutive numbers and prints out the missing DAYS numbers, in such a way that, if the PERIOD value in the row that closes a gap equals the count of missing consecutive numbers plus one, that gap is excluded from the output. For example, consider the two DAYS rows 163 and 165, where the count of missing numbers is 1. In this case, the closing row (where DAYS is 165) has a PERIOD value of 2, that is count + 1, so this missing value (164) is excluded from the output. However, if you look at DAYS 170 and 172, you can see 172 has a PERIOD value of 1 (not 2, i.e. not count + 1), so this missing value (171) is shown in the output.
Here is the first 28 rows of the dataset.
DAYS PERIOD
146 1
147 1
148 1
149 1
150 1
151 1
152 1
153 1
154 1
155 1
156 1
157 1
158 1
159 1
160 1
161 1
162 1
163 1
165 2
166 1
167 1
168 1
169 1
170 1
172 1
173 1
174 1
175 1
I tried
First, created a sequence of expected DAYS values
expected_days <- seq(min(hs$DAYS), max(hs$DAYS))
Then, find the missing DAYS values
missing_days <- setdiff(expected_days, hs$DAYS)
How to do the next bit?
| What is the best way to check for consecutive missing values in a data column in R and exclude them based on a related column value? | I've managed to do this using tidyverse tools:
Set up example data
I've tweaked your data slightly to show that the solution can handle longer runs of missing days.
library(vroom)
library(dplyr)
library(tidyr)
test <-
vroom(
I(
"days period
161 1
162 1
163 1
166 3
167 1
168 1
169 1
170 1
172 1
"),
col_types = c("ii"))
Add 'empty' days explicitly to data frame
all_days <- min(test[["days"]]):max(test[["days"]])
frame <- tibble(days = all_days)
test <-
right_join(test, frame, by = "days") |>
arrange(days)
test
#> # A tibble: 12 × 2
#> days period
#> <int> <int>
#> 1 161 1
#> 2 162 1
#> 3 163 1
#> 4 164 NA
#> 5 165 NA
#> 6 166 3
#> 7 167 1
#> 8 168 1
#> 9 169 1
#> 10 170 1
#> 11 171 NA
#> 12 172 1
Find the number of consecutive missing days
test <-
mutate(test,
no_na = xor(is.na(period), is.na(lag(period))),
missingness_group = cumsum(no_na)) |>
select(-no_na)
test <-
group_by(test, missingness_group) |>
mutate(missing_days =
case_when(
all(is.na(period)) ~ n(),
TRUE ~ 0)) |>
ungroup() |>
select(-missingness_group)
test
#> # A tibble: 12 × 3
#> days period missing_days
#> <int> <int> <dbl>
#> 1 161 1 0
#> 2 162 1 0
#> 3 163 1 0
#> 4 164 NA 2
#> 5 165 NA 2
#> 6 166 3 0
#> 7 167 1 0
#> 8 168 1 0
#> 9 169 1 0
#> 10 170 1 0
#> 11 171 NA 1
#> 12 172 1 0
Remove rows where days are all accounted for
test <- mutate(test, extra_days = period - 1)
test <- fill(test, extra_days, .direction = "up")
test <-
filter(test, !is.na(period) | missing_days > extra_days) |>
select(days, period)
test
#> # A tibble: 10 × 2
#> days period
#> <int> <int>
#> 1 161 1
#> 2 162 1
#> 3 163 1
#> 4 166 3
#> 5 167 1
#> 6 168 1
#> 7 169 1
#> 8 170 1
#> 9 171 NA
#> 10 172 1
Created on 2023-06-01 with reprex v2.0.2
|
76380830 | 76381444 | I have an entity Person
@Entity
@Data
public class Person {
@Temporal(TemporalType.DATE)
private Calendar dob;
}
And some dao classes
@Data
public class PersonResponse {
@JsonFormat(pattern = "yyyy-MM-dd")
private Calendar dob;
}
@Data
public class PersonRequest{
@DateTimeFormat(pattern = "yyyy-MM-dd")
private Calendar dob;
}
When storing values it works perfectly. For example, if I send "2000-01-01" it is stored as-is in the database as "2000-01-01". But when I try to return it, I get "1999-12-31".
Now it's clear that is a Timezone Problem but I don't know how to fix it.
My explanation for the cause
The user's timezone is GMT+1, so the value is somehow retrieved as "2000-01-01T00:00:00.000 +01:00", then parsed to UTC as "1999-12-31T23:00:00.000 +00:00", and finally returned as "1999-12-31".
But why? And how can I prevent this, knowing that users' timezones can change (so manually adding a 1-hour offset won't work)?
I also tried changing the type from Calendar to java.util.Date and java.sql.Date... but no result.
Similar questions were asked before, like this one, but I still couldn't understand how to fix it.
| Spring Boot Wrong date returned | If applicable, try switching from the Calendar class to LocalDate. LocalDate does not take the time zone into consideration. This should resolve your issue (and simplify your code). Also, for formatting the LocalDate with JSON, see the answer to this question: Spring Data JPA - ZonedDateTime format for json serialization
|
76384705 | 76384834 | I am trying to monkey patch a missing import. The old_invoke() still does not get the import.
In case it is relevant, MyClass is a gdb.Command.
(gdb) pi
>>> import mymodule
>>> old_invoke = mymodule.MyClass.invoke
>>> def new_invoke(self, *k, **kw):
... print("New invoke")
... import getopt
... old_invoke(self, *k, **kw)
...
>>> mymodule.MyClass.invoke = new_invoke
>>>
(gdb) my_gdb_command
New invoke
Python Exception <class 'NameError'> name 'getopt' is not defined:
Error occurred in Python: name 'getopt' is not defined
Also, in case it is relevant, the initial files and sourcing looks something like this:
mymodule.py:
import gdb
class MyClass(gdb.Command):
...
def invoke(self, arg, from_tty):
options, remainder = getopt.getopt(args, 'p:s:t:o:')
...
MyClass()
myothermodule.py:
import mymodule
...
Sourcing the above
(gdb) source myothermodule.py
| Python: Fix missing import with a monkey patch | old_invoke is trying to reference mymodule's getopt, which doesn't exist. You need:
>>> import mymodule
>>> old_invoke = mymodule.MyClass.invoke
>>> def new_invoke(self, *k, **kw):
... print("New invoke")
... import getopt
...
... # here
... mymodule.getopt = getopt
...
... old_invoke(self, *k, **kw)
...
>>> mymodule.MyClass.invoke = new_invoke
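To see why assigning into the module's namespace fixes the NameError, here is a small self-contained sketch (it builds a throwaway module with types.ModuleType instead of using the real mymodule): a function looks up bare names in the globals of the module that defines it, so adding getopt to that namespace makes the lookup succeed.
import types

# throwaway stand-in for mymodule: f() references the bare name "getopt"
fake_module = types.ModuleType("fake_module")
exec("def f():\n    return getopt", fake_module.__dict__)

try:
    fake_module.f()
except NameError as exc:
    print(exc)                        # name 'getopt' is not defined

import getopt
fake_module.getopt = getopt           # same trick as mymodule.getopt = getopt above
print(fake_module.f() is getopt)      # True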
But, realistically, you should just have an import getopt in mymodule:
# mymodule.py
import getopt
...
Then your function is simply:
>>> import mymodule
>>> old_invoke = mymodule.MyClass.invoke
>>> def new_invoke(self, *k, **kw):
... print("New invoke")
... old_invoke(self, *k, **kw)
...
>>> mymodule.MyClass.invoke = new_invoke
Addendum
As another note, using import in a function isn't generally recommended. Unless you are only calling this function once (and even then, why?), every time you call the function you are attempting to load a module, which at best will always do a check against sys.modules when you probably don't have to.
Have the import getopt at the top of the script:
import getopt
import mymodule
mymodule.getopt = getopt
Which is where you'd probably expect this to be anyways
|
76382691 | 76382862 | I am trying to create an image of Windows with additional things. My question is whether it is possible to include a specific volume when creating the container. For example, I would like to do:
docker run --name container -v shared:c:\shared -it mcr.microsoft.com/windows/servercore:20H2-amd64 powershell
There I am accessing the shared volume, but I want to do this in the dockerfile as a command.
I want something like this after running the container:
Thank you for the help
I tried to use the VOLUME command but I don't know if I am doing it right or it's not for what I am trying.
| Adding a volume in dockerfile | Using VOLUME in a Dockerfile does not mount the volume during build; it only specifies a target where a directory can be mounted at container runtime (an anonymous volume).
Because the image build and the container run can happen on different machines, having a VOLUME source defined in the Dockerfile (build time) does not make sense.
|
76381353 | 76381491 | I want to implement brush cursor like in most image editors when cursor is a circle that change its size according to brush size. I've read the docs and found only setShape method, but no setSize. Is it possible in Qt to change cursor size?
| How to change cursor size in PyQt5? | pixmap = QPixmap("image.png") # Replace with the path to your custom cursor image, brush in your case
pixmap = pixmap.scaled(30, 30) # Set the desired size
cursor = QCursor(pixmap)
self.setCursor(cursor)
You can change the size and the "form" of your cursor in PyQt5 by creating a pixmap and then assigning it to your cursor.
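As an extension of the same idea, here is a minimal sketch (assuming PyQt5 and a plain QWidget subclass, not your actual editor code) that draws the circle itself onto a transparent pixmap, so the cursor can be rebuilt whenever the brush size changes:
from PyQt5.QtCore import Qt
from PyQt5.QtGui import QCursor, QPainter, QPen, QPixmap
from PyQt5.QtWidgets import QApplication, QWidget

class Canvas(QWidget):
    def set_brush_cursor(self, diameter):
        # draw a circle outline on a transparent pixmap and use it as the cursor
        pixmap = QPixmap(diameter, diameter)
        pixmap.fill(Qt.transparent)
        painter = QPainter(pixmap)
        painter.setPen(QPen(Qt.black, 1))
        painter.drawEllipse(0, 0, diameter - 1, diameter - 1)
        painter.end()
        self.setCursor(QCursor(pixmap))

app = QApplication([])
canvas = Canvas()
canvas.set_brush_cursor(30)  # call again with a new diameter when the brush size changes
canvas.show()
app.exec_()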
|
76384330 | 76384840 | Encoding honestly continues to confuse me, so hopefully this isn't a totally daft question.
I have a Python script that calls metaflac to compare the FLAC fingerprints listed in an ffp file to the fingerprints of the FLAC files themselves. Recently I came across files with » (https://bytetool.web.app/en/ascii/code/0xbb/) in the file name. This broke how I was dealing with the file name strings, so I'm trying to work around that. My first thought was that I needed to deal with this as bytes objects. But when I do that and then call subprocess.run, I get a UnicodeDecodeError.
Here's the snippet of code that is give me errors:
def test():
directory = b'<redacted>'
ffp_open = open(directory + b'<redacted>.ffp','rb')
ffp_lines = ffp_open.readlines()
print(ffp_lines)
for line in ffp_lines:
if not line.startswith(b';') and b':' in line:
txt = line.split(b':')
ffp_cmd = b'/usr/bin/metaflac --show-md5sum \'' + directory + b'/' + txt[0]+ b'\''
print(ffp_cmd)
get_ffp_process = subprocess.run(ffp_cmd, stdout=PIPE, stderr=PIPE, universal_newlines=True,shell=True)
With that, I get the following output (shortened to make more sense):
[b'01 - Intro.flac:eee7ca01db887168ce8312e7a3bdf8d6\r\n', b'04 - Song title \xbb Other Song \xbb.flac:98d2d03f47790d234052c6c9a2ca5cfd\r\n']
b"/usr/bin/metaflac --show-md5sum '<redacted>/01 - Intro.flac'"
b"/usr/bin/metaflac --show-md5sum '<redacted>/04 - Song title \xbb Other Song \xbb.flac'"
get_ffp_process = subprocess.run(ffp_cmd, stdout=PIPE, stderr=PIPE, universal_newlines=True,shell=True)
File "<redacted>/python/lib/python3.9/subprocess.py", line 507, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "<redacted>/python/lib/python3.9/subprocess.py", line 1134, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "<redacted>/python/lib/python3.9/subprocess.py", line 2021, in _communicate
stderr = self._translate_newlines(stderr,
File "<redacted>/python/lib/python3.9/subprocess.py", line 1011, in _translate_newlines
data = data.decode(encoding, errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbb in position 85: invalid start byte
If I run this directly on the command line it works just fine (using tabs to fill in the file name):
metaflac --show-md5sum 04\ -\ Song\ title\ »\ Other Song\ ».flac
98d2d03f47790d234052c6c9a2ca5cfd
The FFP file, through nano, looks like this:
01 - Intro.flac:eee7ca01db887168ce8312e7a3bdf8d6
04 - Song title � Other Song �.flac:98d2d03f47790d234052c6c9a2ca5cfd
I have no control over the file names, so I'm trying to be as flexible as possible to handle them, which is why I thought a bytes object would be best. I'd appreciate any direction. Thanks!
| subprocess.run command with non-utf-8 characters (UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbb) | I believe coding of "latin1" or "cp1252" will do decode that successfully. Also, it is easier to deal with strings than with bytes, so here is my suggestion:
import pathlib
import subprocess
directory = pathlib.Path("/tmp")
with open(directory / "data.ffp", "r", encoding="latin1") as stream:
for line in stream:
if line.startswith(";"):
continue
if ":" not in line:
continue
file_name, expected_md5sum = line.strip().split(":")
print(f"{name=}")
print(f"{expected_md5sum=}")
command = [
"/usr/bin/metaflac",
"--show-md5sum",
str(directory / file_name)
]
print(f"{command=}")
# Now you can run the command. I assume that the command will return a MD5 sum back.
completed_process = subprocess.run(
command,
encoding="latin1",
capture_output=True,
)
# Now, completed_process.stdout will hold the output
# as a string, not bytes.
Here is a sample output:
file_name='01 - Intro.flac'
expected_md5sum='eee7ca01db887168ce8312e7a3bdf8d6'
command=['/usr/bin/metaflac', '--show-md5sum', '/tmp/01 - Intro.flac']
file_name='04 - Song title » Other Song ».flac'
expected_md5sum='98d2d03f47790d234052c6c9a2ca5cfd'
command=['/usr/bin/metaflac', '--show-md5sum', '/tmp/04 - Song title » Other Song ».flac']
Since my system does not have the metaflac command, I cannot test it. Please forgive any error that come up. If an error found, please post in the comment and I will try to fix it.
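As a quick illustration of why the traceback points at byte 0xbb, and why latin1/cp1252 decode it fine, here is a tiny sketch that is independent of metaflac:
raw = b"04 - Song title \xbb Other Song \xbb.flac"

# raw.decode("utf-8")        # would raise UnicodeDecodeError: invalid start byte
print(raw.decode("latin1"))  # 04 - Song title » Other Song ».flac
print(raw.decode("cp1252"))  # same text; 0xbb maps to » in both encodings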
|
76382379 | 76382881 | There are several StackOverflow posts about situations where t.test() in R produces an error saying "data are essentially constant"; this is because there is not enough difference between the groups (there is no variation) to run the t.test(). (Correct me if there is something else.)
I'm in this situation, and I would like to fix this by altering my data in such a way that its statistical features don't change drastically, so the t-test stays valid. I was wondering whether I could add some very small variation to the data (e.g. change 0.301029995663981 to 0.301029995663990), or what else I can do?
For example, this is my data:
# Create the data frame
data <- data.frame(Date = c("2021.08","2021.08","2021.09","2021.09","2021.09","2021.10","2021.10","2021.10","2021.11","2021.11","2021.11","2021.11","2021.11","2021.12","2021.12","2022.01","2022.01","2022.01","2022.01","2022.08","2022.08","2022.08","2022.08","2022.08","2022.09","2022.09","2022.10","2022.10","2022.10","2022.11","2022.11","2022.11","2022.11","2022.11","2022.12","2022.12","2022.12","2022.12","2023.01","2023.01","2023.01","2023.01","2021.08","2021.08","2021.09","2021.09","2021.09","2021.10","2021.10","2021.10","2021.11","2021.11","2021.11","2021.11","2021.11","2021.12","2021.12","2022.01","2022.01","2022.01","2022.01","2022.08","2022.08","2022.08","2022.08","2022.08","2022.09","2022.09","2022.09","2022.09","2022.10","2022.10","2022.10","2022.10","2022.11","2022.11","2022.11","2022.11","2022.11","2022.12","2022.12","2022.12","2022.12","2023.01","2023.01","2023.01","2023.01"),
Species = c("A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A",
"A","A","A","A","A","A","A","A","A","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B",
"B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B"),
Site = c("Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something"),
Mean = c("0.301029995663981","1.07918124604762","0.698970004336019","1.23044892137827","1.53147891704226","1.41497334797082","1.7160033436348",
"0.698970004336019","1.39794000867204","1","0.301029995663981","0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981",
"0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981","0.845098040014257","0.301029995663981","0.301029995663981",
"0.477121254719662","0.698970004336019","1.23044892137827","1.41497334797082","1.95904139232109","1.5910646070265","1.53147891704226",
"1.14612803567824","1.57978359661681","1.34242268082221","0.778151250383644","0.301029995663981","0.301029995663981","0.477121254719662",
"0.301029995663981","1.20411998265592","0.845098040014257","1.17609125905568","1.20411998265592","0.698970004336019","0.301029995663981",
"0.698970004336019","0.698970004336019","0.903089986991944","1.14612803567824","0.301029995663981","0.602059991327962","0.301029995663981",
"0.845098040014257","0.698970004336019","0.698970004336019","0.301029995663981","0.698970004336019","0.301029995663981","0.301029995663981",
"0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981","0.301029995663981","0.301029995663981","0.301029995663981",
"0.602059991327962","0.301029995663981","0.845098040014257","1.92941892571429","1.27875360095283","0.698970004336019","1.38021124171161",
"1.20411998265592","1.38021124171161","1.14612803567824","1","1.07918124604762","1.17609125905568","0.845098040014257","0.698970004336019",
"0.778151250383644","0.301029995663981","0.845098040014257","1.64345267648619","1.46239799789896","1.34242268082221","1.34242268082221",
"0.778151250383644"))
After, I set the factors:
# Set factors
str(data)
data$Date<-as.factor(data$Date)
data$Site<-as.factor(data$Site)
data$Species<-as.factor(data$Species)
data$Mean<-as.numeric(data$Mean)
str(data)
When I try t.test():
compare_means(Mean ~ Species, data = data, group.b = "Date", method = "t.test")
This is the error:
Error in `mutate()`:
ℹ In argument: `p = purrr::map(...)`.
Caused by error in `purrr::map()`:
ℹ In index: 5.
ℹ With name: Date.2021.12.
Caused by error in `t.test.default()`:
! data are essentially constant
Run `rlang::last_trace()` to see where the error occurred.
Similarly, when I use this in ggplot:
ggplot(data, aes(x = Date, y = Mean, fill=Species)) +
geom_boxplot()+
stat_compare_means(data=data,method="t.test", label = "p.signif") +
theme(axis.text.x = element_text(angle = 90, vjust = 0.5, hjust=1))
Warning message:
Computation failed in `stat_compare_means()`
Caused by error in `mutate()`:
ℹ In argument: `p = purrr::map(...)`.
Caused by error in `purrr::map()`:
ℹ In index: 5.
ℹ With name: x.5.
Caused by error in `t.test.default()`:
! data are essentially constant
What is the best solution that keeps the data usable in a t-test?
| Optimize data for t.test to avoid "data are essentially constant" error | Finding the sd of Mean for each Date-Species combination and then filtering out any Dates where any sd is 0 will do the trick. You could even just pipe the filtered data to compare_means():
library(dplyr)
library(ggpubr)
data <- data.frame(Date = c("2021.08","2021.08","2021.09","2021.09","2021.09","2021.10","2021.10","2021.10","2021.11","2021.11","2021.11","2021.11","2021.11","2021.12","2021.12","2022.01","2022.01","2022.01","2022.01","2022.08","2022.08","2022.08","2022.08","2022.08","2022.09","2022.09","2022.10","2022.10","2022.10","2022.11","2022.11","2022.11","2022.11","2022.11","2022.12","2022.12","2022.12","2022.12","2023.01","2023.01","2023.01","2023.01","2021.08","2021.08","2021.09","2021.09","2021.09","2021.10","2021.10","2021.10","2021.11","2021.11","2021.11","2021.11","2021.11","2021.12","2021.12","2022.01","2022.01","2022.01","2022.01","2022.08","2022.08","2022.08","2022.08","2022.08","2022.09","2022.09","2022.09","2022.09","2022.10","2022.10","2022.10","2022.10","2022.11","2022.11","2022.11","2022.11","2022.11","2022.12","2022.12","2022.12","2022.12","2023.01","2023.01","2023.01","2023.01"),
Species = c("A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A","A",
"A","A","A","A","A","A","A","A","A","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B",
"B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B","B"),
Site = c("Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something","Something",
"Something","Something","Something","Something"),
Mean = c("0.301029995663981","1.07918124604762","0.698970004336019","1.23044892137827","1.53147891704226","1.41497334797082","1.7160033436348",
"0.698970004336019","1.39794000867204","1","0.301029995663981","0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981",
"0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981","0.845098040014257","0.301029995663981","0.301029995663981",
"0.477121254719662","0.698970004336019","1.23044892137827","1.41497334797082","1.95904139232109","1.5910646070265","1.53147891704226",
"1.14612803567824","1.57978359661681","1.34242268082221","0.778151250383644","0.301029995663981","0.301029995663981","0.477121254719662",
"0.301029995663981","1.20411998265592","0.845098040014257","1.17609125905568","1.20411998265592","0.698970004336019","0.301029995663981",
"0.698970004336019","0.698970004336019","0.903089986991944","1.14612803567824","0.301029995663981","0.602059991327962","0.301029995663981",
"0.845098040014257","0.698970004336019","0.698970004336019","0.301029995663981","0.698970004336019","0.301029995663981","0.301029995663981",
"0.301029995663981","0.477121254719662","0.301029995663981","0.301029995663981","0.301029995663981","0.301029995663981","0.301029995663981",
"0.602059991327962","0.301029995663981","0.845098040014257","1.92941892571429","1.27875360095283","0.698970004336019","1.38021124171161",
"1.20411998265592","1.38021124171161","1.14612803567824","1","1.07918124604762","1.17609125905568","0.845098040014257","0.698970004336019",
"0.778151250383644","0.301029995663981","0.845098040014257","1.64345267648619","1.46239799789896","1.34242268082221","1.34242268082221",
"0.778151250383644"))
data$Date<-as.factor(data$Date)
data$Site<-as.factor(data$Site)
data$Species<-as.factor(data$Species)
data$Mean<-as.numeric(data$Mean)
data %>%
group_by(Date, Species) %>%
mutate(s = sd(Mean)) %>%
group_by(Date) %>%
filter(!any(s == 0)) %>%
compare_means(Mean ~ Species, data = ., group.b = "Date", method = "t.test")
#> # A tibble: 11 × 9
#> Date .y. group1 group2 p p.adj p.format p.signif method
#> <fct> <chr> <chr> <chr> <dbl> <dbl> <chr> <chr> <chr>
#> 1 2021.08 Mean A B 0.718 1 0.718 ns T-test
#> 2 2021.09 Mean A B 0.451 1 0.451 ns T-test
#> 3 2021.10 Mean A B 0.0889 0.89 0.089 ns T-test
#> 4 2021.11 Mean A B 0.850 1 0.850 ns T-test
#> 5 2022.01 Mean A B 1 1 1.000 ns T-test
#> 6 2022.08 Mean A B 0.234 1 0.234 ns T-test
#> 7 2022.09 Mean A B 0.670 1 0.670 ns T-test
#> 8 2022.10 Mean A B 0.0707 0.78 0.071 ns T-test
#> 9 2022.11 Mean A B 0.783 1 0.783 ns T-test
#> 10 2022.12 Mean A B 0.399 1 0.399 ns T-test
#> 11 2023.01 Mean A B 0.255 1 0.255 ns T-test
Created on 2023-06-01 with reprex v2.0.2
|
76384487 | 76384848 | Here is my simple code:
package org.example;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import java.util.Arrays;
import java.util.List;
public class Main {
public static void writeOutput(Dataset<Row> df, String outputPath) {
df.write()
.option("header", "true")
.option("delimiter", "\t")
.csv(outputPath);
}
public static void main(String[] args) {
// Create a SparkSession
SparkSession spark = SparkSession.builder()
.appName("DataFrameWriter")
.getOrCreate();
// Create a DataFrame (assuming df is already defined)
List<Row> data = Arrays.asList(
RowFactory.create("John", 25, "New York"),
RowFactory.create("Alice", 30, "San Francisco"),
RowFactory.create("Bob", 35, "Chicago")
);
StructType schema = DataTypes.createStructType(new StructField[] {
DataTypes.createStructField("name", DataTypes.StringType, true),
DataTypes.createStructField("age", DataTypes.IntegerType, true),
DataTypes.createStructField("city", DataTypes.StringType, true)
});
Dataset<Row> df = spark.createDataFrame(data, schema);
// Specify the output path
String outputPath = "src/main/java/output";
// Call the writeOutput method
writeOutput(df, outputPath);
// Stop the SparkSession
spark.stop();
}
}
Here is my build.gradle file:
plugins {
id 'java'
}
group = 'org.example'
version = '1.0-SNAPSHOT'
repositories {
mavenCentral()
}
dependencies {
compileOnly 'org.apache.spark:spark-sql_2.12:3.2.0'
implementation 'org.apache.spark:spark-core_2.12:3.2.0'
testImplementation platform('org.junit:junit-bom:5.9.1')
testImplementation 'org.junit.jupiter:junit-jupiter'
}
test {
useJUnitPlatform()
}
And errors:
Task :Main.main() FAILED
Error: Unable to initialize main class org.example.Main
Caused by: java.lang.NoClassDefFoundError: org/apache/spark/sql/Dataset
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':Main.main()'.
> Process 'command '/Library/Java/JavaVirtualMachines/jdk-11.0.11.jdk/Contents/Home/bin/java'' finished with non-zero exit value 1
java -version:
java version "11.0.19" 2023-04-18 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.19+9-LTS-224)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.19+9-LTS-224, mixed mode)
scala -version:
Scala code runner version 3.3.0 -- Copyright 2002-2023, LAMP/EPFL
Spark: version 3.4.0
Using Scala version 2.12.17 (OpenJDK 64-Bit Server VM, Java 17.0.7)
Could you tell me what could be wrong? Pretty simple code, just can't figure out what to check. I've already tried reinstalling everything.
| Trying to run simple code that writes a dataframe as a csv file using spark and Java. java.lang.NoClassDefFoundError: org/apache/spark/sql/Dataset | Avoid using the compileOnly directive for dependencies whose implementation will be needed at runtime, as stated in Gradle's Java Library plugin user guide https://docs.gradle.org/current/userguide/java_library_plugin.html and blog post https://blog.gradle.org/introducing-compile-only-dependencies
|
76380747 | 76381513 | I have two items positioned vertically, and I'd like the narrower one to be as wide as the wider one.
My code looks like
<div class="flex flex-col items-end">
<div>This should take all the space</div>
<div class="flex flex-row gap-x-4">
<div> this is the first element</div>
<div> this is the second element</div>
</div>
</div>
and produce
However, I would like the result to be
items-end is needed because the items are displayed on the right side of the page.
I have tried to mess with positioning, but I cannot achieve the final result I'm looking for.
Can anyone give me a hand on this?
| Give same width to items vertically positioned | You could shrink-wrap the container and then right align it:
<script src="https://cdn.tailwindcss.com"></script>
<div class="flex flex-col w-max ml-auto">
<div>This should take all the space</div>
<div class="flex flex-row gap-x-4">
<div> this is the first element</div>
<div> this is the second element</div>
</div>
</div>
You could also use a grid layout with one grid column track sized to max-content and then align the grid column track to the right:
<script src="https://cdn.tailwindcss.com"></script>
<div class="grid grid-cols-[max-content] justify-end">
<div>This should take all the space</div>
<div class="flex flex-row gap-x-4">
<div> this is the first element</div>
<div> this is the second element</div>
</div>
</div>
|
76382807 | 76382905 | I'm trying to send messages to my users from my server using Pusher Channels. My api receives a list of users and the message needs to be sent to all the users in the list. I can't group these users into a single channel and an individual channel has to be used for each user. This makes my api slow as the list of users can have a size of anything between 1 and 10000 (possibly more in the future), and Pusher batch events can only accept an Event list of size 10.
I'm using .net 6 for my api
I've tried using batch events to try and improve performance; my code looks something like this,
var events = new List<Event>();
// count can be anything between 1 and 10000
for (int i = 1; i <= count; i++)
{
// creating a sample list of events
events.Add(new Event
{
Channel = string.Format("batch-channel-{0}", i),
EventName = "batch-event",
Data = new
{
Channel = string.Format("batch-channel-{0}", i),
Event = "batch-event",
Message = string.Format("{0} - sample message", i)
}
});
}
var result = new List<HttpStatusCode>();
int chunkSize = 10;
int totalChunks = (int)Math.Ceiling((double)events.Count / chunkSize);
for (int i = 0; i < totalChunks; i++)
{
var eventsChunk = events.Skip(i * chunkSize).Take(chunkSize).ToArray();
// publishing event lists of size 10
var chunkResult = await pusher.TriggerAsync(eventsChunk);
result.Add(chunkResult.StatusCode);
}
I tested this code with a Event list of size 10000 and it takes around 6 minutes to complete execution. I want to know if there is anything I'm missing and if I can somehow improve performance.
Any help is appreciated. Thank you.
| Is there a better way to publish messages using Pusher Channels' batch event? | If you are sending the same event to multiple channels then you can use the standard trigger endpoint but specify a list of the channels that you are broadcasting to. For example:
using PusherServer;
var options = new PusherOptions();
options.Cluster = "APP_CLUSTER";
var pusher = new Pusher("APP_ID", "APP_KEY", "APP_SECRET", options);
ITriggerResult result = await pusher.TriggerAsync(
new string[]{"my-channel-1", "my-channel-2", "my-channel-3"},
"my-event",
new { message: "hello world" });
This would trigger the event to the three specified channels. You can specify up to 100 channels in a single publish.
If you are sending a different event to each channel then the batch event endpoint you have mentioned would be the way forward. In this case you might look at multi-threading or similar to handle multiple batch triggers at the same time, rather than sequentially.
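For instance, a rough sketch of firing the batch chunks concurrently rather than one after another — this reuses the pusher client and events list from your code, .Chunk() is the .NET 6 LINQ helper, and you may still want to cap the degree of concurrency to respect rate limits (not shown here):
// split the events into batches of 10 and trigger them all in parallel
var triggerTasks = events
    .Chunk(10)
    .Select(chunk => pusher.TriggerAsync(chunk))
    .ToList();
var chunkResults = await Task.WhenAll(triggerTasks);
var result = chunkResults.Select(r => r.StatusCode).ToList();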
Source - https://pusher.com/docs/channels/server_api/http-api/#example-publish-an-event-on-multiple-channels
|
76384790 | 76384880 | I'm currently trying to webscrape websites for tables using pandas and I get this error for one of the links.
Here's a snippet of what causes the crash:
import pandas as pd
website_df = pd.read_html("https://ballotpedia.org/Roger_Wicker")
print(website_df)
Below is the error I get, does anyone know how to fix this?
Traceback (most recent call last):
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\python_parser.py", line 700, in _next_line
line = self._check_comments([self.data[self.pos]])[0]
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\python_parser.py", line 385, in _infer_columns
line = self._next_line()
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\python_parser.py", line 713, in _next_line
raise StopIteration
StopIteration
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\legislators-current.py", line 15, in <module>
website_df = pd.read_html("https://ballotpedia.org/Roger_Wicker")
File "C:\Users\miniconda3\lib\site-packages\pandas\util\_decorators.py", line 331, in wrapper
return func(*args, **kwargs)
File "C:\Users\miniconda3\lib\site-packages\pandas\io\html.py", line 1205, in read_html
return _parse(
File "C:\Users\miniconda3\lib\site-packages\pandas\io\html.py", line 1011, in _parse
df = _data_to_frame(data=table, **kwargs)
File "C:\Users\miniconda3\lib\site-packages\pandas\io\html.py", line 890, in _data_to_frame
with TextParser(body, header=header, **kwargs) as tp:
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\readers.py", line 1876, in TextParser
return TextFileReader(*args, **kwds)
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\readers.py", line 1442, in __init__
self._engine = self._make_engine(f, self.engine)
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\readers.py", line 1753, in _make_engine
return mapping[engine](f, **self.options)
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\python_parser.py", line 122, in __init__
) = self._infer_columns()
File "C:\Users\miniconda3\lib\site-packages\pandas\io\parsers\python_parser.py", line 395, in _infer_columns
raise ValueError(
ValueError: Passed header=[1,2], len of 2, but only 2 lines in file
| Pandas Webscraping Errors | Set header=0. You're going to get a lot of dataframes, but you can parse them to get what you need.
website_df = pd.read_html("https://ballotpedia.org/Roger_Wicker", header=0)
|
76381459 | 76381521 | I am using fluentbit as a pod deployment where I am creating many fluentbit pods which are attached to azure blob containers. Since multiple pods exist I tried adding tolerations as I did on the daemonset deployment, but it failed. Also, every time I delete and restart the pods, all the data is re-ingested again. Please advise on fixing these issues.
apiVersion: v1
kind: Pod
metadata:
name: deployment
spec:
volumes:
- name: config_map_name
configMap:
name: config_map_name
- name: pvc_name
persistentVolumeClaim:
claimName: pvc_name
containers:
- name: fluentbit-logger
image: fluent/fluent-bit:2.1.3
env:
- name: FLUENTBIT_USER
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: user
- name: FLUENTBIT_PWD
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: pwd
resources:
requests:
memory: "32Mi"
cpu: "50m"
limits:
memory: "64Mi"
cpu: "100m"
securityContext:
runAsUser: 0
privileged: true
volumeMounts:
- name: config_map_name
mountPath: "/fluent-bit/etc"
- name: pvc_name
mountPath: mount_path
tolerations:
- key: "dedicated"
operator: "Equal"
value: "sgelk"
effect: "NoSchedule"
- key: "dedicated"
operator: "Equal"
value: "kafka"
effect: "NoSchedule"
Getting the error as below
error: error validating "/tmp/fluentbit-deploy.yaml": error validating data: ValidationError(Pod.spec.containers[0]): unknown field "tolerations" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
| Adding tolerations to fluentbit pod and making it persistent | The tolerations attribute needs to be set on the pod, but you are attempting to set it on a container (that's why you see the error "unknown field "tolerations" in io.k8s.api.core.v1.Container"). You would need to write:
apiVersion: v1
kind: Pod
metadata:
name: deployment
spec:
volumes:
- name: config_map_name
configMap:
name: config_map_name
- name: pvc_name
persistentVolumeClaim:
claimName: pvc_name
containers:
- name: fluentbit-logger
image: fluent/fluent-bit:2.1.3
env:
- name: FLUENTBIT_USER
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: user
- name: FLUENTBIT_PWD
valueFrom:
secretKeyRef:
name: fluentbit-secret
key: pwd
resources:
requests:
memory: "32Mi"
cpu: "50m"
limits:
memory: "64Mi"
cpu: "100m"
securityContext:
runAsUser: 0
privileged: true
volumeMounts:
- name: config_map_name
mountPath: "/fluent-bit/etc"
- name: pvc_name
mountPath: mount_path
tolerations:
- key: "dedicated"
operator: "Equal"
value: "sgelk"
effect: "NoSchedule"
- key: "dedicated"
operator: "Equal"
value: "kafka"
effect: "NoSchedule"
|
76382480 | 76382917 | I am trying to create a playbook where I want to perform a simple debug task after cpu load is below 2.0.
I have this so far in cpu-load.yml:
---
- name: Check CPU load and wait
hosts: localhost
gather_facts: yes
tasks:
- name: Check cpu load
shell: uptime | awk -F 'load average:' '{print $2}' | awk -F ', ' '{print $1}'
register: cpu_load
- name: Wait until cpu load is below 2.0
wait_for:
timeout: 300
delay: 10
shell: Do something here
msg: "cpu load is bellow 2.0"
- name: Continue jobs
debug:
msg: "CPU load is bellow 2.0. Continue!!!"
Now I am not sure how to make the task wait for the cpu load to go bellow 2.0. Can you guys help?
| Ansible - starting a task after cpu load is below 2.0 | You need to put an until loop around your "check cpu load" task:
- hosts: localhost
gather_facts: false
tasks:
- name: Check cpu load
shell: uptime | awk -F 'load average:' '{print $2}' | awk -F ', ' '{print $1}'
register: cpu_load
until: cpu_load.stdout|float < 2.0
retries: 300
delay: 1
- name: Some other task
debug:
msg: hello world
This will wait up to five minutes (300 retries with a 1-second delay) for the load average to drop below 2.0.
There are probably better ways to get the current 1-minute CPU load; reading from /proc/loadavg may be easiest:
- hosts: localhost
gather_facts: false
tasks:
- name: Check cpu load
command: cat /proc/loadavg
register: cpu_load
until: cpu_load.stdout.split()|first|float < 2.0
retries: 300
delay: 1
- name: Some other task
debug:
msg: hello world
|
76382514 | 76382923 | How to load a separate JS file in Shopware 6 using webpack?
What?
I'm trying to load a separate javascript file next to the all.js file by using WebPack.
Why?
The all.js file can get really big and you're loading unnecessary javascript on a page. So by using code splitting (which should be possible since WebPack is implemented in Shopware 6) and dynamic imports you could stop loading unnecessary javascript.
What I've tried
I've added a webpack.config.js file inside the root of my theme plugin like so:
module.exports = {
entry: {
main: './src/main.js',
separateFile: './src/js/separate.js'
},
output: {
filename: '[name].js'
},
optimization: {
splitChunks: {
chunks: 'all',
},
},
};
After running bin/build-storefront.sh there is no separate JS file added in the public folder.
I'm also trying to dynamically load this JS file in src/Resources/app/storefront/src/main.js but this results in a 404 since the separate file doesn't exist.
| How can I use Webpack to load a separate JS file in Shopware 6 and improve web performance? | This will not work since all pre-compiled assets of plugins are collected in the ThemeCompiler and concatenated into one single script. This is done in PHP since node is not a requirement for production environments.
You could try to add separate scripts as additional custom assets, but you would still have to extend the template to add the corresponding script tags manually.
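As a rough, hypothetical sketch of that second approach — the Twig block name and the asset path below are assumptions, so check the base.html.twig of your Shopware version and your plugin's public asset structure before relying on it:
{# e.g. <YourTheme>/src/Resources/views/storefront/base.html.twig — hypothetical paths #}
{% sw_extends '@Storefront/storefront/base.html.twig' %}

{% block base_body_script %}
    {{ parent() }}
    <script src="{{ asset('bundles/yourtheme/separate.js', 'asset') }}" defer></script>
{% endblock %}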
|
76384679 | 76384898 | Context:
I have a datacube with 3 variables (3D arrays, dims:time,y,x). The datacube is too big to fit in memory so I chunk it with xarray/dask. I want to apply a function to every cell in x,y of every variable in my datacube.
Problem:
My method takes a long time to load only one cell (1 minute) and I have to do that 112200 times. I use a for loop with dataset.variable.isel(x=i, y=j).values to load a single 1D array from my variables. Is there a better way to do that ? Also, knowing my dataset is chunked, is there a way to do that in parallel for all the chunks at once ?
Code example:
# Setup
import xarray as xr
import numpy as np
# Create the dimensions
x = np.linspace(0, 99, 100)
y = np.linspace(0, 349, 350)
time = np.linspace(0, 299, 300)
# Create the dataset
xrds= xr.Dataset()
# Add the dimensions to the dataset
xrds['time'] = time
xrds['y'] = y
xrds['x'] = x
# Create the random data variables with chunking
chunksize = (10, 100, 100) # Chunk size for the variables
data_var1 = np.random.rand(len(time), len(y), len(x))
data_var2 = np.random.rand(len(time), len(y), len(x))
data_var3 = np.random.rand(len(time), len(y), len(x))
xrds['data_var1'] = (('time', 'y', 'x'), data_var1, {'chunks': chunksize})
xrds['data_var2'] = (('time', 'y', 'x'), data_var2, {'chunks': chunksize})
xrds['data_var3'] = (('time', 'y', 'x'), data_var3, {'chunks': chunksize})
#### ---- My Attempt ---- ####
# Iterate through all the variables in my dataset
for var_name, var_data in xrds.data_vars.items():
# if variable is 3D
if var_data.shape == (xrds.dims['time'], xrds.dims['y'], xrds.dims['x']):
# Iterate through every cell of the variable along the x and y axis only
for i in range(xrds.dims['y']):
for j in range(xrds.dims['x']):
# Load a single 1D cell into memory (len(cell) = len(time))
print(xrds.v.isel(y=i,x=j).values)
| Chunked xarray: load only 1 cell in memory efficiently | I find that explicitly iterating over the xarray is faster than isel(), by about 10%.
Example:
for var_name, var_data in xrds.data_vars.items():
# if variable is 3D
if var_data.shape == (xrds.dims['time'], xrds.dims['y'], xrds.dims['x']):
# Iterate through every cell of the variable along the x and y axis only
for i_array in var_data.transpose('x', 'y', 'time'):
x_coordinate = i_array.x.item()
for cell in i_array.transpose('y', 'time'):
y_coordinate = cell.y.item()
# Do something with cell
This takes 17.38s, versus 20.47s for the original.
PS: The line chunksize = (10, 100, 100) seems very suspicious to me. It seems like if you want to load an array corresponding to the entire time axis at once, the chunks should be set so that this doesn't require looking at multiple chunks. It seems like chunksize = (len(time), 100, 100) would be more efficient. However, I benchmarked this both ways and it doesn't make a difference for this data size. May make a difference on your larger problem.
|
76381460 | 76381528 | I have the data below:
time=c(200,218,237,237,237,237,237,246,246,246,257,257,257,272,272,272,294,294,294)
location=c("A","A","D","C","A","B","B","D","C","B","D","C","B","D","C","B","D","C","B")
value=c(0,774,0,0,2178,0,2178,0,1494,2644,1326,1504,4188,3558,1385,5013,12860,829,3483)
dataA=data.frame(time,location,value)
and I made a graph.
ggplot(data=dataA, aes(x=time, y=value))+
geom_area(aes(group=location, fill=location), position="stack", linetype=1, size=0.5 ,colour="black") +
scale_fill_discrete(breaks=c("A","B","C","D"), labels=c("Main_town","B","C","D"))+
theme_classic(base_size=18, base_family="serif")+
theme(legend.position="right",
axis.line=element_line(linewidth=0.5, colour="black"))+
windows(width=5.5, height=5)
I changed one of the legend label from A to main_town using scale_fill_discrete(). Then color is automatically generated.
I want to change this color according to my preference. When I add a code, scale_fill_manual(values=c("darkblue","darkred","khaki4","darkgreen"))+ the below message shows up and the graph is before I changed legend label.
Scale for fill is already present.
Adding another scale for fill, which will replace the existing scale.
How can I change colors when using scale_fill_discrete()? I want to change colors to "darkblue","darkred","khaki4","darkgreen"
Could you please let me know how to do that? Or, alternatively, how can I simply change the legend labels while keeping the colors I want?
Always many thanks!!!
| How to change colors when using scale_fill_discrete in R? | I think you need scale_fill_discrete(type = c(...)).
library(ggplot2)
ggplot(data=dataA, aes(x=time, y=value))+
geom_area(aes(group=location, fill=location), position="stack", linetype=1, size=0.5 ,colour="black") +
scale_fill_discrete(breaks=c("A","B","C","D"), labels=c("Main_town","B","C","D"),
type=c("darkblue","darkred","khaki4","darkgreen"))+
theme_classic(base_size=18, base_family="serif")+
theme(legend.position="right",
axis.line=element_line(linewidth=0.5, colour="black"))
|
76381485 | 76381539 | I have the following code:
var expressions = new List<IQueryable<Container>>();
var containers1 = containers
.Where(x => EF.Functions.Like(x.ContainerReference1, $"%{message.SearchValue}%")
|| EF.Functions.Like(x.ContainerReference2, $"%{message.SearchValue}%"))
.OrderBy(x => x.ContainerReference1)
.ThenBy(x => x.ContainerReference2)
.ThenByDescending(x => x.DateUpdated);
expressions.Add(containers1);
var containers2 = containers
.Where(x => EF.Functions.Like(x.Description, $"%{message.SearchValue}%"))
.OrderBy(x => x.Description)
.ThenByDescending(x => x.DateUpdated);
expressions.Add(containers2);
var containers3 = containers.Where(x => x.ContactEmails
.OrderBy(y => y.Email)
.ThenBy(y => y.DisplayName)
.Any(y => EF.Functions.Like(y.Email, $"%{message.SearchValue}%")
|| EF.Functions.Like(y.DisplayName, $"%{message.SearchValue}%")))
.OrderByDescending(x => x.DateUpdated);
expressions.Add(containers3);
var containers4 = containers
.Where(x => EF.Functions.Like(x.Keywords, $"%{message.SearchValue}%"))
.OrderBy(x => x.Keywords)
.ThenByDescending(x => x.DateUpdated);
expressions.Add(containers4);
containers = expressions.Aggregate((acc, i) => acc.Union(i));
But after .Union operation sorting is reset.
How can I prevent resetting of sorting?
| C# IQueryable .Union reset sorting | Union operator does not preserve the order of the elements. You need to dynamically construct the sorting logic based on the presence of data
var expressions = new List<IQueryable<Container>>();
var sortingExpressions = new List<Func<IQueryable<Container>, IOrderedQueryable<Container>>>();
var containers1 = containers
.Where(x => EF.Functions.Like(x.ContainerReference1, $"%{message.SearchValue}%")
|| EF.Functions.Like(x.ContainerReference2, $"%{message.SearchValue}%"));
if (containers1.Any())
{
var containers1Sorting = new Func<IQueryable<Container>, IOrderedQueryable<Container>>(x => x
.OrderBy(y => y.ContainerReference1)
.ThenBy(y => y.ContainerReference2)
.ThenByDescending(y => y.DateUpdated));
expressions.Add(containers1);
sortingExpressions.Add(containers1Sorting);
}
var containers2 = containers
.Where(x => EF.Functions.Like(x.Description, $"%{message.SearchValue}%"));
if (containers2.Any())
{
var containers2Sorting = new Func<IQueryable<Container>, IOrderedQueryable<Container>>(x => x
.OrderBy(y => y.Description)
.ThenByDescending(y => y.DateUpdated));
expressions.Add(containers2);
sortingExpressions.Add(containers2Sorting);
}
var containers3 = containers
.Where(x => x.ContactEmails
.Any(y => EF.Functions.Like(y.Email, $"%{message.SearchValue}%")
|| EF.Functions.Like(y.DisplayName, $"%{message.SearchValue}%")));
if (containers3.Any())
{
var containers3Sorting = new Func<IQueryable<Container>, IOrderedQueryable<Container>>(x => x
.OrderBy(y => y.ContactEmails.OrderBy(z => z.Email).ThenBy(z => z.DisplayName))
.OrderByDescending(y => y.DateUpdated));
expressions.Add(containers3);
sortingExpressions.Add(containers3Sorting);
}
var containers4 = containers
.Where(x => EF.Functions.Like(x.Keywords, $"%{message.SearchValue}%"));
if (containers4.Any())
{
var containers4Sorting = new Func<IQueryable<Container>, IOrderedQueryable<Container>>(x => x
.OrderBy(y => y.Keywords)
.ThenByDescending(y => y.DateUpdated));
expressions.Add(containers4);
sortingExpressions.Add(containers4Sorting);
}
var mergedContainers = expressions.Aggregate((acc, i) => acc.Union(i));
if (sortingExpressions.Any())
{
var mergedSorting = sortingExpressions
.Aggregate((acc, next) => q => next(acc(q)));
containers = mergedSorting(mergedContainers);
}
else
{
containers = mergedContainers.OrderByDescending(x => x.DateUpdated);
}
|
76381526 | 76381562 | I have a .json file but I got the tokenId numbering wrong. I need to increase all values of "tokenId" by 1 number
[
{
"Background": "Red",
"Body": "Tunn",
"Hat": "Bambu",
"Outfit": "Pirate",
"Expression": "Sad",
"Accessory": "Rifle",
"tokenId": 0
},
{
"Background": "Lilac",
"Body": "Tunn",
"Hat": "Bicorn",
"Outfit": "Pirate",
"Expression": "Angry",
"Accessory": "Balloons",
"tokenId": 1
},
...
{
"Background": "Green",
"Body": "Tunn",
"Hat": "Bicorn",
"Outfit": "Pirate",
"Expression": "Sad",
"Accessory": "Balloons",
"tokenId": 3000
},
Is it possible to do this with Python? I created this .json file with Python.
I tried this code, but I get an error
import json
with open('traits.json') as f:
data = json.load(f)
for item in data['tokenId']:
item['tokenId'] = item['tokenId'].replace([int('+1')])
with open('new_data.json', 'w') as f:
json.dump(data, f)
TypeError: list indices must be integers or slices, not str
Thank you!
| How can I use Python to increment 'tokenId' values in a .json file? | To increase the values of the "tokenId" field in your JSON file by 1, you can modify your code as follows:
import json
with open('traits.json') as f:
data = json.load(f)
for item in data:
item['tokenId'] += 1
with open('new_data.json', 'w') as f:
json.dump(data, f)
In your original code, you were trying to access data['tokenId'] as if it was a list, but it is actually a dictionary. Instead, you need to iterate over the list data and update the "tokenId" field of each item. By using item['tokenId'] += 1, you increment the value of "tokenId" by 1.
Finally, the modified data is saved to a new JSON file named "new_data.json" using json.dump(data, f).
After running this code, the "new_data.json" file will contain the updated "tokenId" values with an increment of 1.
|
76382811 | 76382942 | I have created an index file so that the information entered here is added to the created element
This is for a review section.
The index.html file is shown below, and includes the CSS and JS.
let name = document.querySelector('.name').value;
let message = document.querySelector('.message').value;
let btn = document.getElementById('button');
let div = document.querySelector('.items')
btn.addEventListener('click', ()=>{
let item = document.createElement('div')
let inner = `
<h3>${name}</h3>
<p>${message}</p>
`
item.className = "message-item"
item.innerHTML = inner
div.append(item)
});
html, body{
padding: 0;
margin: 0;
}
.msg{
padding: 2em;
margin: 2em;
border-radius: 2vh;
height: 70vh;
display: flex;
align-items: center;
justify-content: left;
flex-direction: column;
background-color: #1e90ff;
}
.items{
height: 65vh;
overflow: scroll;
color: white;
width: 100%;
overflow-x: hidden;
margin: 10px;
}
input{
padding: 10px;
border: none;
border-radius: 8px;
outline: none;
font-size: 1em;
}
#button{
padding: 10px 20px;
border-radius: 8px;
border: none;
font-size: 1em;
}
.button{
padding: 10px 20px;
border-radius: 8px;
border: none;
font-size: 1em;
}
.message-item{
background-color: black;
padding: 1em;
border-radius: 8px;
margin: 3px;
}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title></title>
</head>
<body>
<div class="msg">
<div class="items"></div>
<div class="input">
<input type="text" class="name" placeholder="Name">
<input type="text" class="message" placeholder="Message">
<button id="button">Submit</button>
<button type="reset">Reset</button>
</div>
</div>
</body>
</html>
So I am expecting it to append elements which have different values.
For example, once I enter the **name** "harry" and the **message** "this is the message",
and then I reset and enter another **name** and **message**, the newly created element should display the newly entered **name** and **message**.
| Update input values into elements only JS | Your name variable should be a pointer to the element, not the value.
Also, you should clear the input after adding.
const
name = document.querySelector('.name'),
message = document.querySelector('.message'),
btn = document.getElementById('button'),
div = document.querySelector('.items');
const handleAdd = (e) => {
div.insertAdjacentHTML('beforeend', `
<div class="message-item">
<h3>${name.value}</h3>
<p>${message.value}</p>
</div>
`);
name.value = ''; // Clear name
message.value = ''; // Clear message
};
btn.addEventListener('click', handleAdd);
html,
body {
padding: 0;
margin: 0;
}
.msg {
padding: 2em;
margin: 2em;
border-radius: 2vh;
height: 70vh;
display: flex;
align-items: center;
justify-content: left;
flex-direction: column;
background-color: #1e90ff;
}
.items {
height: 65vh;
overflow: scroll;
color: white;
width: 100%;
overflow-x: hidden;
margin: 10px;
}
input {
padding: 10px;
border: none;
border-radius: 8px;
outline: none;
font-size: 1em;
}
#button {
padding: 10px 20px;
border-radius: 8px;
border: none;
font-size: 1em;
}
.button {
padding: 10px 20px;
border-radius: 8px;
border: none;
font-size: 1em;
}
.message-item {
background-color: black;
padding: 1em;
border-radius: 8px;
margin: 3px;
}
<div class="msg">
<div class="items"></div>
<div class="input">
<input type="text" class="name" placeholder="Name">
<input type="text" class="message" placeholder="Message">
<button id="button">Submit</button>
<button type="reset">Reset</button>
</div>
</div>
A better approach
A better example would be to use a form. This way you can take advantage of built-in form validation, submission, and resetting.
For example, you can call elements by their name and you have the added bonus of Enter key support.
Enter a name
Press Tab
Enter a message
Press Enter
The item is added
The form is cleared
Focus is sent to the name
const handleAdd = (e) => {
e.preventDefault(); // Prevent page from navigating
const
form = e.target,
formElements = form.elements,
parent = form.closest('.msg'),
items = parent.querySelector('.items');
items.insertAdjacentHTML('beforeend', `
<div class="message-item">
<h3>${formElements.name.value}</h3>
<p>${formElements.message.value}</p>
</div>
`);
formElements.name.value = ''; // Clear name
formElements.message.value = ''; // Clear message
formElements.name.focus();
};
document.forms.namedItem('new-msg')
.addEventListener('submit', handleAdd);
html,
body {
padding: 0;
margin: 0;
}
.msg {
padding: 2em;
margin: 2em;
border-radius: 2vh;
height: 70vh;
display: flex;
align-items: center;
justify-content: left;
flex-direction: column;
background-color: #1e90ff;
}
.items {
height: 65vh;
overflow: scroll;
color: white;
width: 100%;
overflow-x: hidden;
margin: 10px;
}
input {
padding: 10px;
border: none;
border-radius: 8px;
outline: none;
font-size: 1em;
}
.form-btn {
padding: 10px 20px;
border-radius: 8px;
border: none;
font-size: 1em;
}
.message-item {
background-color: black;
padding: 1em;
border-radius: 8px;
margin: 3px;
}
<div class="msg">
<div class="items"></div>
<form id="new-msg" autocomplete="off">
<input type="text" name="name" placeholder="Name" required>
<input type="text" name="message" placeholder="Message">
<button type="submit" class="form-btn">Submit</button>
<button type="reset" class="form-btn">Reset</button>
</form>
</div>
LocalStorage
Here is an example of local storage. The main idea here is how to store and restore the state of the messages.
const MESSAGES_KEY = "messages";
const main = () => {
// Restore all messages
const messageContainer = document.querySelector(".items");
__retrieveAllMessages().forEach((message) => {
insertMessage(message, messageContainer);
});
// Add event listener
document.forms.namedItem("new-msg").addEventListener("submit", handleAdd);
};
const saveMessage = (message) => {
__saveAllMessages(__retrieveAllMessages().concat(message));
};
const insertMessage = (message, container) => {
container.insertAdjacentHTML("beforeend", messageToHtml(message));
};
const messageToHtml = ({ name, message }) => `
<div class="message-item">
<h3>${name}</h3>
<p>${message}</p>
</div>
`;
const handleAdd = (e) => {
e.preventDefault(); // Prevent page from navigating
const form = e.target,
message = {
name: form.elements.name.value,
message: form.elements.message.value,
};
saveMessage(message);
insertMessage(message, form.closest(".msg").querySelector(".items"));
form.elements.name.value = ""; // Clear name
form.elements.message.value = ""; // Clear message
form.elements.name.focus();
};
const __retrieveAllMessages = () => {
return JSON.parse(localStorage.getItem(MESSAGES_KEY) ?? "[]");
};
const __saveAllMessages = (messages = []) => {
return localStorage.setItem(MESSAGES_KEY, JSON.stringify(messages));
};
main();
|
76381469 | 76381565 | I have the following xml:
<?xml version="1.0" encoding="utf-8"?>
<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.opengis.net/wfs/2.0 http://www.wfs.nrw.de/aaa-suite/schema/ogc/wfs/2.0/wfs.xsd" timeStamp="2023-06-01T13:31:53.444+02:00" numberReturned="0" numberMatched="9359426"/>
How can I extract the value of numberMatched using an xml parser like fast-xml-parser in NodeJS?
| How to extract data from xml in NodeJS? | You need to set ignoreAttributes option to false
import { XMLParser } from "fast-xml-parser";
const XMLdata = `<?xml version="1.0" encoding="utf-8"?>
<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.opengis.net/wfs/2.0 http://www.wfs.nrw.de/aaa-suite/schema/ogc/wfs/2.0/wfs.xsd" timeStamp="2023-06-01T13:31:53.444+02:00" numberReturned="0" numberMatched="9359426"/>`;
const parser = new XMLParser({
ignoreAttributes: false,
attributeNamePrefix: ""
});
let jObj = parser.parse(XMLdata);
console.log(jObj["wfs:FeatureCollection"].numberMatched);
|
76384843 | 76384912 | I use Retrofit and Coroutines to fetch a list of languages from an API. My ViewModel loads the list of languages whenever it is created. I want to load this list into a spinner, but whenever I update the spinner data, the data in the ViewModel hasn't completely loaded.
Here my viewmodel class to load data from an api
class TranslateViewModel : ViewModel() {
var langToCode = HashMap<String, String>()
var codeToLang = HashMap<String, String>()
private val translateRepository = TranslateRepository()
init {
viewModelScope.launch {
val langList = try {
translateRepository.getLangList()
} catch (e: IOException) {
Log.e(TAG, "IOException", e)
return@launch
} catch (e: HttpException) {
Log.e(TAG, "HTTPException", e)
return@launch
}
if (langList.isSuccessful && langList.body() != null) {
codeToLang = langList.body()!!.langList
langToCode = codeToLang.map { (key, value) ->
value to key
}.toMap() as HashMap<String, String>
} else {
Log.d(TAG, "Get lang list failure")
}
}
}
}
I want to load my list of languages into my spinner but my viewmodel is taking too long to load the data, so the langList variable is always empty.
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
viewBinding.apply {
sourceLanguage.apply {
val langList = viewModel.langToCode.keys.toList()
val arrayAdapter = ArrayAdapter (
context,
android.R.layout.simple_spinner_item,
langList
).also { it.setDropDownViewResource(
android.R.layout.simple_spinner_dropdown_item
)
}
this.adapter = arrayAdapter
}
}
I want to know how to update my UI immediately after my coroutine has finished loading data. I appreciate all of your help, and please forgive me for my poor English skills.
| How can I update my UI after a coroutine is completed in Kotlin for Android? | In your case, update your ViewModel as below:
class TranslateViewModel : ViewModel() {
private val _langList = MutableLiveData<List<String>>()
val langList: LiveData<List<String>> get() = _langList
init {
viewModelScope.launch {
val langList = translateRepository.getLangList() // same repository call as in your original init block
if (langList.isSuccessful && langList.body() != null) {
codeToLang = langList.body()!!.langList
langToCode = codeToLang.map { (key, value) ->
value to key
}.toMap() as HashMap<String, String>
_langList.value = langToCode.keys.toList() // Update the LiveData value
} else {
Log.d(TAG, "Get lang list failure")
}
}
}
}
In your UI code, observe the LiveData and update the spinner as below:
viewModel.langList.observe(viewLifecycleOwner, { langList ->
val arrayAdapter = ArrayAdapter(
context,
android.R.layout.simple_spinner_item,
langList
).also { it.setDropDownViewResource(
android.R.layout.simple_spinner_dropdown_item
)
}
sourceLanguage.adapter = arrayAdapter
})
Follow this link for more detail - https://developer.android.com/topic/libraries/architecture/livedata
|
76382497 | 76382988 | I'm working on a block-based programming language based off of Google's Blockly. I need to make a block that loops the contents forever, for making games.
I tried a while (true) loop but it froze. Is there any way to make a forever loop that won't freeze and will let other scripts run?
Thanks!
| How to make a forever loop in JS not freeze | check setTimeout() : https://developer.mozilla.org/en-US/docs/Web/API/setTimeout
Something like that to loop indefinitely without blocking the main thread (you should probably design a way to break the loop at some point) :
function doSomeStuff() {
// do some stuff…
setTimeout(() => {
doSomeStuff();
}, 1000);
}
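One small sketch of such a break mechanism (not Blockly-specific): guard the recursion with a flag that a hypothetical "stop" block can clear:
let running = true;

function gameLoop() {
  if (!running) return;      // stop once the flag is cleared
  // run one iteration of the block's contents here
  setTimeout(gameLoop, 0);   // yields to the event loop so other scripts keep running
}

gameLoop();
// elsewhere, e.g. in a "stop" block handler:
// running = false;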
|
76384804 | 76384938 | Why am I getting "formula parse error" when I try to classify the ages (column H) into groups using the following formula? And is there a better way? Thanks for your assistance:
=IF (H19<20, “0-19”, IF ((H19>=20 AND H19<40), “20-39”, IF ((H19>=40 AND H19<60), “40-59”, IF ((H19>=60 AND H19<70), “60-69”, IF (H19>=70, ">= 70", “WRONG”)))))
I was expecting to output the Age column into strings based on my category definitions.
| What is the correct syntax to classify ages into groups using IF statements in Google Sheets? | The portions that you have formatted as (H19>=20 AND H19<40) should be changed to AND(H19>=20, H19<40). Your final formula should then be:
=IF(H19<20, "0-19",
   IF(AND(H19>=20, H19<40), "20-39",
   IF(AND(H19>=40, H19<60), "40-59",
   IF(AND(H19>=60, H19<70), "60-69",
   IF(H19>=70, ">= 70", "WRONG")))))
Alternatively:
=IFS(OR(NOT(ISNUMBER(H19)),H19<0), "WRONG",
H19<20, "0-19",
AND(H19>=20, H19<40), "20-39",
AND(H19>=40, H19<60), "40-59",
AND(H19>=60, H19<70), "60-69",
H19>=70, ">= 70")
|
76381508 | 76381592 | So what I have is two Pandas dataframes in Python with a large number of xyz-coordinates. One of them will be used to mask/remove some coordinates in the other one, but the problem is that the coordinates are very slightly different so that I cannot simply remove duplicates. As an example, let's say they look like this:
df1 = pd.DataFrame(data=None, columns=['x', 'y', 'z'])
df1.x = [104245, 252355, 547364, 135152]
df1.y = [842714, 135812, 425328, 124912]
df1.z = [125125, 547574, 364343, 346372]
df2 = pd.DataFrame(data=None, columns=['x', 'y', 'z'])
df2.x = [104230, 547298]
df2.y = [842498, 424989]
df2.z = [124976, 364001]
What I then want is for the first and second xyz-rows in df2 to remove the first and third row in df1. My idea was to create new columns with rounded values, compare those, and remove based on those. It would look something like this:
df1['id'] = np.linspace(0,len(df1)-1,len(df1))
df2['id'] = np.linspace(0,len(df2)-1,len(df2))
df3 = df1.round({'x': -3, 'y': -3, 'z': -3})
df4 = df2.round({'x': -3, 'y': -3, 'z': -3})
df5 = df3.merge(df4, on=['x', 'y', 'z'], how='inner')
df6 = df1[~df1.index.isin(df5.id_x)]
This works fine to remove some of the values, but often they round to different places. I was hoping with help if there is a better method to mask those values which are simply closest in all three coordinates. Maybe that it finds the closest xyz-pair between df1 and df2 and masks those pairs. If anyone has any ideas I would really appreciate it!
| Masking a pandas column based on another column with slightly different values | You can use numpy broadcasting to consider the individual distances between the coordinates:
# convert DataFrames to numpy arrays
a1 = df1.to_numpy()
a2 = df2.to_numpy()
# define a distance below which the coordinates are considered equal
thresh = 500
# compute the distances, identify matches on all coordinates
matches = (abs(a1[:,None]-a2) <= thresh).all(axis=-1)
idx1, idx2 = np.where(matches)
# (array([0, 2]), array([0, 1]))
out = df1.drop(df1.index[idx1])
To consider the euclidean distance between the points (taking into account all coordinates simultaneously), use scipy.spatial.distance.cdist:
from scipy.spatial.distance import cdist
thresh = 1000
matches = cdist(a1, a2) <= thresh
idx1, idx2 = np.where(matches)
out = df1.drop(df1.index[idx1])
Output:
x y z
1 252355 135812 547574
3 135152 124912 346372
removing the single point from df1 that is closest to each row of df2 and below a threshold
from scipy.spatial.distance import cdist
thresh = 1000
dist = cdist(a1, a2)
idx = np.argmin(dist, axis=0)
out = df1.drop(df1.index[idx[dist[idx, np.arange(len(a2))] <= thresh]])
If the distance doesn't matter and you only want to remove the closest point:
from scipy.spatial.distance import cdist
dist = cdist(a1, a2)
idx = np.argmin(dist, axis=0)
out = df1.drop(df1.index[idx])
|
76382887 | 76382994 | I have the following little program in Python
from pathlib import Path
filename = Path("file.txt")
content = "line1\nline2\nline3\n"
with filename.open("w+", encoding="utf-8") as file:
file.write(content)
After running it I get the following file (as expected)
line1
line2
line3
However, depending on where the program runs, line ending is different.
If I run it in Windows, I get CRLF line termination:
$ file -k file.txt
file.txt: ASCII text, with CRLF line terminators
If I run it in Linux, I get LF line termination:
$ file -k file.txt
file.txt: ASCII text
So, I understand that Python is using the default from the system in which it runs, which is fine most of the times. However, in my case I'd like to fix the line ending style, no matter the system where I run the program.
How this could be done?
| How to fix the line ending style (either CRLF or LF) in Python when written a text file? | It is possible to explicitly specify the string used for newlines using the newline parameter. It works the same with open() and pathlib.Path.open().
The snippet below will always use Linux line endings \n:
from pathlib import Path
filename = Path("file.txt")
content = "line1\nline2\nline3\n"
with filename.open("w+", encoding="utf-8", newline='\n') as file:
file.write(content)
Setting newline='\r\n' will give Windows line endings and not setting it or setting newline=None (the default) will use the OS default.
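For example, forcing Windows-style endings on every platform only changes that one argument:
with filename.open("w+", encoding="utf-8", newline='\r\n') as file:
    file.write(content)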
|
76382888 | 76383031 | I have many JSON files with the following structure:
{
"requestId": "test",
"executionDate": "2023-05-10",
"executionTime": "12:02:22",
"request": {
"fields": [{
"geometry": {
"type": "Point",
"coordinates": [-90, 41]
},
"colour": "blue",
"bean": "blaCk",
"birthday": "2021-01-01",
"arst": "111",
"arstg": "rst",
"fct": {
"start": "2011-01-10",
"end": "2012-01-10"
}
}]
},
"response": {
"results": [{
"geom": {
"type": "geo",
"coord": [-90, 41]
},
"md": {
"type": "arstat",
"mdl": "trstr",
"vs": "v0",
"cal": {
"num": 4,
"comment": "message"
},
"bean": ["blue", "green"],
"result_time": 12342
},
"predictions": [{
"date": "2004-05-19",
"day": 0,
"count": 0,
"eating_stage": "trt"
}, {
"date": "2002-01-20",
"day": 1,
"count": 0,
"eating_stage": "arstg"
}, {
"date": "2004-05-21",
"day": 2,
"count": 0,
"eating_stage": "strg"
}, {
"date": "2004-05-22",
"day": 3,
"count": 0,
"eating_stage": "rst"
			}]
		}]
	}
}
The predictions part can be very deep. I want to convert this JSON to a CSV with the following structure:
requestId
executionDate
executionTime
colour
predictions_date
predictions_day
predictions_count
predictions_eating_stage
test
2023-05-10
12:02:22
blue
2004-05-19
0
0
trt
test
2023-05-10
12:02:22
blue
2002-01-20
1
0
astrg
test
2023-05-10
12:02:22
blue
2004-05-21
2
0
strg
test
2023-05-10
12:02:22
blue
2004-05-22
3
0
rst
I tried the following code:
flat_json = pd.DataFrame(
flatten(json_data), index=[0]
)
The code results in every data point becoming a column, and I am not sure how to pivot longer where at the 'predictions' key using JSON functions in Python. I recognise that at this stage I could pivot longer using column names, but I feel like there is a cleaner way to achieve this.
| Partially flatten nested JSON and pivot longer | I would suggest simply extracting what you need. It seems very specific for it to be solved using specific parsing. Therefore I would start by creating two dataframes:
df_prediction = pd.DataFrame(example['response']['results'][0]['predictions'])
df_data = pd.DataFrame({x:y for x,y in example.items() if type(y)==str},index=[0])
Renaming columns in predictions:
df_prediction.columns = ['prediction_'+x for x in df_prediction]
Joining and adding the last piece of data (colour):
output = df_data.assign(colour = example['request']['fields'][0]['colour']).join(df_prediction,how='right').ffill()
Outputting:
requestId executionDate ... prediction_count prediction_eating_stage
0 test 2023-05-10 ... 0 trt
1 test 2023-05-10 ... 0 arstg
2 test 2023-05-10 ... 0 strg
3 test 2023-05-10 ... 0 rst
|
76383903 | 76384969 | This question is connected to [-> here].
I would like to reorganize the following nested dict please:
a = {
(0.0, 0.0): {'a': [25, 29, nan]},
(0.0, 2.0): {'a': [25, 29, nan], 'b': [25, 35, 31.0]},
(0.0, 4.0): {'b': [25, 35, 31.0]},
(2.0, 0.0): {'a': [25, 29, nan], 'c': [25, 26, 29.0]},
(2.0, 1.5): {'a': [25, 29, nan], 'c': [25, 26, 29.0]},
(2.0, 2.0): {'a': [25, 29, nan], 'b': [25, 35, 31.0]},
(2.0, 4.0): {'b': [25, 35, 31.0]},
(3.0, 3.0): {'d': [25, 31, 32.0]},
(3.0, 5.0): {'d': [25, 31, 32.0]},
(5.0, 0.0): {'c': [25, 26, 29.0]},
(5.0, 1.5): {'c': [25, 26, 29.0]},
(5.0, 3.0): {'d': [25, 31, 32.0]},
(5.0, 5.0): {'d': [25, 31, 32.0]},
(6.0, 1.0): {'e': [25, 28, 30.0]},
(6.0, 3.0): {'e': [25, 28, 30.0]},
(8.0, 1.0): {'e': [25, 28, 30.0]},
(8.0, 3.0): {'e': [25, 28, 30.0]}
}
I want to swap the inner and outer keys.
Some outer keys will duplicate and the value should become a list of lists. The result should be:
{'a': {(0.0, 0.0): [[25, 29, nan]],
(0.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
(2.0, 0.0): [[25, 29, nan], [25, 26, 29.0]],
(2.0, 1.5): [[25, 29, nan], [25, 26, 29.0]],
(2.0, 2.0): [[25, 29, nan], [25, 35, 31.0]]},
'b': {(0.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
(0.0, 4.0): [[25, 35, 31.0]],
(2.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
(2.0, 4.0): [[25, 35, 31.0]]},
'c': {(2.0, 0.0): [[25, 29, nan], [25, 26, 29.0]],
(2.0, 1.5): [[25, 29, nan], [25, 26, 29.0]],
(5.0, 0.0): [[25, 26, 29.0]],
(5.0, 1.5): [[25, 26, 29.0]]},
'd': {(3.0, 3.0): [[25, 31, 32.0]],
(3.0, 5.0): [[25, 31, 32.0]],
(5.0, 3.0): [[25, 31, 32.0]],
(5.0, 5.0): [[25, 31, 32.0]]},
'e': {(6.0, 1.0): [[25, 28, 30.0]],
(6.0, 3.0): [[25, 28, 30.0]],
(8.0, 1.0): [[25, 28, 30.0]],
(8.0, 3.0): [[25, 28, 30.0]]}
}
Intuition tells me pd.DataFrame with a .groupby() [and cull the NaN cells] would be the way to go...
df = pd.DataFrame(dict_vertices)
print(df.head(2))
0.0 2.0 ... 8.0 6.0
0.0 0.0 1.5 ... 1.0 3.0 3.0
a [25, 29, nan] [25, 29, nan] [25, 29, nan] ... NaN NaN NaN
c NaN [[25, 26, 29.0]] [[25, 26, 29.0]] ... NaN NaN NaN
[2 rows x 17 columns]
...but I don't know.
How do I reorganize the following nested dict please; where the value follows the outer key?
| Reorganize nested `dict` | You can use:
out = {}
for k1, d in a.items():
for k2 in d:
out.setdefault(k2, {})[k1] = list(d.values())
Output:
{'a': {(0.0, 0.0): [[25, 29, nan]],
 (0.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
 (2.0, 0.0): [[25, 29, nan], [25, 26, 29.0]],
 (2.0, 1.5): [[25, 29, nan], [25, 26, 29.0]],
 (2.0, 2.0): [[25, 29, nan], [25, 35, 31.0]]},
 'b': {(0.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
 (0.0, 4.0): [[25, 35, 31.0]],
 (2.0, 2.0): [[25, 29, nan], [25, 35, 31.0]],
 (2.0, 4.0): [[25, 35, 31.0]]},
 'c': {(2.0, 0.0): [[25, 29, nan], [25, 26, 29.0]],
 (2.0, 1.5): [[25, 29, nan], [25, 26, 29.0]],
(5.0, 0.0): [[25, 26, 29.0]],
(5.0, 1.5): [[25, 26, 29.0]]},
'd': {(3.0, 3.0): [[25, 31, 32.0]],
(3.0, 5.0): [[25, 31, 32.0]],
(5.0, 3.0): [[25, 31, 32.0]],
(5.0, 5.0): [[25, 31, 32.0]]},
'e': {(6.0, 1.0): [[25, 28, 30.0]],
(6.0, 3.0): [[25, 28, 30.0]],
(8.0, 1.0): [[25, 28, 30.0]],
(8.0, 3.0): [[25, 28, 30.0]]},
}
|
76381414 | 76381603 |
How can I make the bellow regex exclude matches that span across lines?
import re
reg = re.compile(r'\b(apple)(?:\W+\w+){0,4}?\W+(tree|plant|garden)')
reg.findall('my\napple tree in the garden')
reg.findall('apple\ntree in the garden')
The first one should match, the second one should not.
(Now both matches...)
| How to exclude linebreaks from a regex match in python? | Your \W matches newlines. To exclude them replace \W with [^\w\n]:
import re
reg = re.compile(r'\b(apple)(?:[^\n\w]+\w+){0,4}?[^\n\w]+(tree|plant|garden)')
print(reg.findall('my\napple tree in the garden'))
# [('apple', 'tree')]
print(reg.findall('apple\ntree in the garden'))
# []
|
76381292 | 76381604 | I'm creating a nestjs API so I'm using classes to declare my entities for example
export class Customer {
id: number;
name: string;
}
So now I'm working in my Customer controller and I would like to type a GET query param as customer.id, so that if some day the customer id data type changes to string, my controller query param automatically becomes a string too.
@GET()
getCustomerById(@Params('id', id: Customer.id)) {
return this.customerService.getCustomerById(id))
}
Is it possible? Thanks
| How to declare a constant datatype using a class property datatype in typescript? | You can use TypeScript lookup types:
getCustomerById(@Params('id') id: Customer['id']) {}
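A small illustrative sketch (plain TypeScript, outside of Nest) of how the lookup type follows the entity — if Customer.id later changes to string, everything typed as Customer['id'] changes with it:
class Customer {
  id: number;
  name: string;
}

type CustomerId = Customer['id']; // currently number

function getCustomerById(id: Customer['id']) {
  // id is typed as number here; it would become string automatically
  // if the Customer entity changed
  return id;
}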
|
76384902 | 76384970 | I have two data frames t1 and t2. I want a seaborn plot where it plots side by side for every variable using the for loop. I was able to achieve this but I fail when I try to set the customized x labels. How do I incorporate set_xlabel in to the for loop?
data1 = {
'var1': [1, 2, 3, 4],
'var2': [20, 21, 19, 18],
'var3': [5, 6, 7, 8]
}
data2 = {
    'var1': [5, 2, 3, 5],
    'var2': [21, 18, 3, 11],
    'var3': [1, 9, 3, 6]
}
t1 = pd.DataFrame(data1)
t2 = pd.DataFrame(data2)
xlabel_list = ["new_var1", "new_var2", "new_var3"]
def fun1(df1, df2, numvar, new_label):
plt.tight_layout()
fig, ax = plt.subplots(1, 2)
sns.kdeplot(data = df1[numvar], linewidth = 3, ax=ax[0])
sns.kdeplot(data = df2[numvar], linewidth = 3, ax=ax[1])
ax[0].set_xlabel(new_label, weight='bold', size = 10)
ax[1].set_xlabel(new_label, weight='bold', size = 10)
for col in t1.columns: # how to incorporate the new_label parameter in the for loop along with col?
fun1(df1 = t1, df2 = t2, numvar = col, new_label??)
| Different X Labels for Different Variables | Use zip:
for col, new_label in zip(t1.columns, xlabel_list):
    fun1(df1=t1, df2=t2, numvar=col, new_label=new_label)
|
76382981 | 76383058 | I am trying to add the corresponding value from df to df1 for each time the name and week match in df1
df
Name Week Value
0 Frank Week 3 8.0
1 Bob Week 3 8.0
2 Bob Week 4 8.0
3 Elizabeth Week 3 4.0
4 Mario Week 2 1.5
5 Mario Week 3 2.5
6 Michelle Week 3 8.0
7 Michelle Week 4 1.0
8 Darwin Week 1 1.0
9 Darwin Week 2 0.5
10 Darwin Week 3 11.0
11 Collins Week 1 8.0
12 Collins Week 2 6.0
13 Collins Week 3 17.0
14 Collins Week 4 7.0
15 Alexis Week 1 1.5
16 Daniel Week 3 2.0
df1
Name Week Total
0 Frank Week 1 16
1 Frank Week 1 3
2 Frank Week 3 28
3 Frank Week 3 1
4 Frank Week 4 3
.. ... ... ...
310 Daniel Week 2 50
311 Daniel Week 3 56
312 Daniel Week 4 78
313 Kevin Week 4 162
314 Kevin Week 4 46
Expected:
df1
Name Week Total
0 Frank Week 1 16
1 Frank Week 1 3
2 Frank Week 3 **36**
3 Frank Week 3 **9**
4 Frank Week 4 3
.. ... ... ...
310 Daniel Week 2 50
311 Daniel Week 3 **58**
312 Daniel Week 4 78
313 Kevin Week 4 162
314 Kevin Week 4 46
| Add pre-defined value to DataFrame on each instance of matching index | Use a merge + assign:
out = (df1
.merge(df, how='left')
.assign(Total=lambda d: d['Total'].add(d.pop('Value'), fill_value=0))
)
Output:
Name Week Total
0 Frank Week 1 16.0
1 Frank Week 1 3.0
2 Frank Week 3 36.0
3 Frank Week 3 9.0
4 Frank Week 4 3.0
...
5 Daniel Week 2 50.0
6 Daniel Week 3 58.0
7 Daniel Week 4 78.0
8 Kevin Week 4 162.0
9 Kevin Week 4 46.0
|
76384338 | 76384983 | I am processing sales data, sub-setting across a combination of two distinct dimensions.
The first is a category as indicated by each of these three indicators ['RA','DS','TP']. There are more indicators in the data; however, those are the only ones of interest, and the others not mentioned but in the data can be ignored.
In combination with those indicators, I want to subset across varying time intervals 7 days back, 30 days back, 60 days back, 90 days back, 120 days back, and no time constraint
Without looping, this would mean writing 18 distinct functions for those combinations of dimensions (3 categories x 6 time windows), which is what I first started to do.
for example a function that subsets on DS and 7 days back:
def seven_days_ds(df):
subset = df[df['Status Date'] > (datetime.now() - pd.to_timedelta("7day"))]
subset = subset[subset['Association Label']=="DS"]
grouped_subset = subset.groupby(['Status Labelled'])
status_counts_seven_ds = (pd.DataFrame(grouped_subset['Status Labelled'].count()))
status_counts_seven_ds.columns = ['Counts']
status_counts_seven_ds = status_counts_seven_ds.reset_index()
return status_counts_seven_ds #(the actual function is more complicated than this).
And then repeat this, but changing the subset criteria for each combination of category and time-delta for 18 combinations of the variables of interest. Obviously, this is not ideal.
Is there a way to have a single function that creates those 18 objects, or (ideally), a single object whose columns indicate the dimensions being subset on? ie counts_ds_7 etc.
Or is this not possible, and I'm stuck doing it the long way doing them all separately?
| Looping through combinations of subsets of data for processing | IIUC, you can use :
def crossubsets(df):
labels = ["RA", "DS", "TP"]
time_intervals = [7, 30, 60, 90, 120, None]
group_dfs = df.loc[
df["Association Label"].isin(labels)
].groupby("Association Label")
data = []
for l, g in group_dfs:
for ti in time_intervals:
s = (
g[g["Status Date"] > (pd.Timestamp.now() - pd.Timedelta(ti, "d"))]
if ti is not None else g
)
data.append(s["Status Labelled"].value_counts().rename(f"counts_{l}_{ti}"))
return pd.concat(data, axis=1) #with optional .T to have 18 rows instead of cols
|
76381561 | 76381629 | I'm using the Boston Housing data set from the MASS package, and working with splines from the gam package in R. However, an error is returned with this code:
library(gam)
library(MASS)
library(tidyverse)
Boston.gam <- gam(medv ~ s(crim) + s(zn) + s(indus) + s(nox) + s(rm) + s(age) + s(dis) + s(rad) + s(tax) + s(ptratio) + s(black) + s(lstat), data = Boston)
The error message is:
A smoothing variable encountered with 3 or less unique values; at least 4 needed
The variable that is causing the issue is chas, it only has two values, 1 and 0.
What is a test to determine if a column has 3 or fewer unique values so it can be eliminated from the spline analysis?
| How to find columns with three or fewer distinct values | Would this work?
You can use dplyr::n_distinct() to perform the unique check.
# Number of unique values
n_unique_vals <- map_dbl(Boston, n_distinct)
# Names of columns with >= 4 unique vals
keep <- names(n_unique_vals)[n_unique_vals >= 4]
# Model data
gam_data <- Boston %>%
dplyr::select(all_of(keep))
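If you then want to rebuild the model call from the question on the filtered columns without typing every term, one possible sketch (assuming medv stays the response) is to assemble the formula programmatically:
predictors <- setdiff(names(gam_data), "medv")
gam_formula <- reformulate(paste0("s(", predictors, ")"), response = "medv")
Boston.gam <- gam(gam_formula, data = gam_data)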
|
76381619 | 76381632 | I have a layout like this:
<com.google.android.material.textfield.TextInputLayout
android:id="@+id/user_description_input_layout"
android:layout_width="0dp"
android:layout_height="wrap_content"
android:layout_marginHorizontal="16dp"
android:layout_marginTop="16dp"
android:hint="Description"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent"
app:startIconContentDescription="Lalala">
<com.google.android.material.textfield.TextInputEditText
android:id="@+id/user_description_input"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:hint="description" />
</com.google.android.material.textfield.TextInputLayout>
It's height is exactly the size of ONE line, but I wish it could be the size of 2 line exactly.
I tried adding theses attributes to my TextInputEditText tag:
<com.google.android.material.textfield.TextInputEditText
...
android:maxLines="4"
android:scrollbars="vertical" />
But that made it start at a 1-line height and only grow to 2 as the user types in it. I would like it to be fixed at 2 lines from the beginning, even if it does not yet have enough text to need 2 lines.
I also would like it to have a fixed size and allow the user to scroll vertically in case they add text that is longer than 2 lines.
I know I COULD do it programmatically by adding enough characters until it has a 2-line height, fixing that height, and then clearing the TextInputEditText, but that is such an ugly solution.
| Android - How to make the height of a TextInputEditText to be exactly of 2 lines? | Try this, in your TextInputEditText
android:minLines="2"
android:gravity="top"
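Applied to the TextInputEditText from your layout, that would look roughly like this — android:maxLines and android:scrollbars are carried over from your own earlier attempt (not part of the answer above) in case you also want to cap the height at two lines and scroll longer text:
<com.google.android.material.textfield.TextInputEditText
    android:id="@+id/user_description_input"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:hint="description"
    android:minLines="2"
    android:maxLines="2"
    android:gravity="top"
    android:scrollbars="vertical" />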
|
76384389 | 76384989 | I have a PostMapping with a form where a user can create a meeting and invite employees. My problem is that the employees are not saved to the database.
Here is my MeetingDTO:
@Data
@Builder
public class MeetingDto {
private Long id;
@NotEmpty(message = "Content could not be empty")
private String contentOfMeeting;
@FutureOrPresent
private LocalDateTime startOfMeeting;
private LocalDateTime endOfMeeting;
private Status status;
private List<Employee> employees;
private Visitor visitor;
}
Here is my controller:
@GetMapping("/visitors/new-meeting")
public String createMeetingForm(Model model) {
List<Employee> employeeList = employeeRepository.findAll();
model.addAttribute("employeeList", employeeList);
model.addAttribute("meeting", new Meeting());
return "visitors-createAMeeting";
}
@PostMapping("/visitors/new-meeting")
public String saveMeeting(@ModelAttribute("meeting") MeetingDto meetingDto) {
String nameOfVisitor;
Object principal = SecurityContextHolder.getContext().getAuthentication().getPrincipal();
if (principal instanceof UserDetails) {
nameOfVisitor = ((UserDetails)principal).getUsername();
} else {
nameOfVisitor = principal.toString();
}
Long visitorId = visitorRepository.findByEmailAddress(nameOfVisitor).getId();
meetingDto.setVisitor(visitorRepository.findById(visitorId).orElse(null));
meetingService.createMeeting(visitorId, meetingDto);
return "redirect:/visitors/home";
}
ServiceImpl:
@Service
public class MeetingServiceImpl implements MeetingService {
private MeetingRepository meetingRepository;
private EmployeeRepository employeeRepository;
private VisitorRepository visitorRepository;
@Autowired
public MeetingServiceImpl(MeetingRepository meetingRepository, EmployeeRepository employeeRepository,
VisitorRepository visitorRepository) {
this.meetingRepository = meetingRepository;
this.employeeRepository = employeeRepository;
this.visitorRepository = visitorRepository;
}
private Meeting mapToMeeting(MeetingDto meetingDto) {
return Meeting.builder()
.id(meetingDto.getId())
.contentOfMeeting(meetingDto.getContentOfMeeting())
.startOfMeeting(meetingDto.getStartOfMeeting())
.endOfMeeting(meetingDto.getEndOfMeeting())
.status(Status.valueOf(String.valueOf(Status.REJECTED)))
.employees(meetingDto.getEmployees())
.build();
}
@Override
public void createMeeting(Long visitorId, MeetingDto meetingDto) {
Visitor visitor = visitorRepository.findById(visitorId).orElse(null);
Meeting meeting = mapToMeeting(meetingDto);
meeting.setVisitor(visitor);
meeting.setEmployees(meetingDto.getEmployees());
meetingRepository.save(meeting);
}
}
And my template for GetMapping:
<!DOCTYPE html>
<html lang="en" xmlns:th="http://www.thymeleaf.org" xmlns="http://www.w3.org/1999/html">
<head>
<meta charset="UTF-8">
<title>Create a meeting</title>
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css"
integrity="sha384-B0vP5xmATw1+K9KRQjQERJvTumQW0nPEzvF6L/Z6nronJ3oUOFUFpCjEUQouq2+l"
crossorigin="anonymous" />
</head>
<body>
<div class="container">
<h3>Create a meeting</h3>
<hr/>
<form action="#" th:action="@{/visitors/new-meeting}" th:object="${meeting}" method="post">
<p>Content: <input type="text" id="content" name="content" th:field="*{contentOfMeeting}" placeholder="Content"></p>
<p th:if="${#fields.hasErrors('contentOfMeeting')}" class="text-danger" th:errors="*{contentOfMeeting}"></p>
<p>Start of meeting: <input type="datetime-local" id="start" name="start" th:field="*{startOfMeeting}" placeholder="Start of meeting"></p>
<p th:if="${#fields.hasErrors('startOfMeeting')}" class="text-danger" th:errors="*{startOfMeeting}"></p>
<p>End of meeting: <input type="datetime-local" id="end" name="email" th:field="*{endOfMeeting}" placeholder="End of meeting"></p>
<p></pd><span th:if="${#fields.hasErrors('endOfMeeting')}" th:errors="*{endOfMeeting}" class="text-danger">End of meeting can not be before start of meeting</span></p>
<label>Employees: </label>
<p>To select more than one employee please click ctrl</p>
<select id="employee" class="form-control" th:field="${employeeList}" multiple name="employeeList">
<option th:each="employee : ${employeeList}" th:value="${employee.id}" th:text="${employee.name + ' ' + employee.surname}"></option>
</select>
<br>
<p><input type="submit" value="Submit"/></p>
<br>
<h6><a th:href="@{/logout}">Logout</a></h6>
<br>
</form>
</div>
</body>
</html>
Could you be so kind to take a look and help me to solve the issue?
I tried refactoring the template and the controller but the problem still exists.
| Spring boot + JPA problem with post mapping form where is select multiple | While reviewing your code, I noticed a potential issue with how the relationships are handled in JPA/Hibernate. When you're dealing with related entities, in this case Meeting and Employee, it's crucial to manage both sides of the relationship correctly.
In your code, you're assigning employees to a meeting using meeting.setEmployees(meetingDto.getEmployees());. This is correct, but depending on your relationship setup, it may not be sufficient. You might also need to set the meeting to each employee. For example, you could iterate over each employee and add the meeting:
List<Employee> employees = meetingDto.getEmployees();
for(Employee employee : employees) {
employee.getMeetings().add(meeting); // Assumes a getMeetings() method in Employee class
}
This snippet adds the current meeting to the list of meetings for each employee. When you save your meeting, the related employees should also be updated.
Of course, this suggestion is based on common practice when using JPA/Hibernate, and the specific implementation may need adjustment according to your actual entity configuration. It's important to ensure the relationship between Meeting and Employee entities is set correctly, with appropriate cascading settings. You might need to set CascadeType.PERSIST or CascadeType.MERGE to make sure the changes to the employees are stored when saving the meeting.
If the problem persists, it would be helpful to take a closer look at the parts of your Employee and Meeting entities that define their relationship. This would allow for a more precise solution to your problem.
Revised Answer
The challenge seems to be in correctly assigning only the selected employees to the meeting, rather than all the employees as currently happens.
From the look of your form, it seems likely that only the IDs of the selected employees are being sent to the server when the form is submitted. So, we should adjust your MeetingDto to hold a list of these IDs. Here's how:
public class MeetingDto {
// Other fields...
private List<Long> employeeIds; // Replaced from List<Employee> employees
// Remaining fields...
}
Next, we can modify the createMeeting method within your MeetingService to handle these employee IDs:
@Override
public void createMeeting(Long visitorId, MeetingDto meetingDto) {
Visitor visitor = visitorRepository.findById(visitorId).orElse(null);
Meeting meeting = mapToMeeting(meetingDto);
meeting.setVisitor(visitor);
List<Employee> selectedEmployees = employeeRepository.findAllById(meetingDto.getEmployeeIds()); // Retrieve employees by their IDs
meeting.setEmployees(selectedEmployees);
meetingRepository.save(meeting);
}
Lastly, we need to ensure your form is sending the IDs of the selected employees. Your select element in the form should be modified to look like this:
<select id="employee" class="form-control" th:field="*{employeeIds}" multiple name="employeeList">
<option th:each="employee : ${employeeList}" th:value="${employee.id}" th:text="${employee.name + ' ' + employee.surname}"></option>
</select>
With these alterations, your form will be transmitting the IDs of the selected employees to the server. Then, your service can retrieve the relevant employees based on these IDs from the database. As a result, only the selected employees will be associated with the meeting.
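One follow-up worth noting: the mapToMeeting helper from the question still calls meetingDto.getEmployees(), which no longer compiles once the DTO holds IDs. A minimal sketch of the adjusted helper (the employees are attached later in createMeeting):
private Meeting mapToMeeting(MeetingDto meetingDto) {
    return Meeting.builder()
            .id(meetingDto.getId())
            .contentOfMeeting(meetingDto.getContentOfMeeting())
            .startOfMeeting(meetingDto.getStartOfMeeting())
            .endOfMeeting(meetingDto.getEndOfMeeting())
            .status(Status.REJECTED)
            // no .employees(...) here; createMeeting sets the looked-up entities
            .build();
}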
|
76382806 | 76383084 | I have a df like this
my_df <- data.frame(
b1 = c(2, 6, 3, 6, 4, 2, 1, 9, NA),
b2 = c(100, 4, 106, 102, 6, 6, 1, 1, 7),
b3 = c(75, 79, 8, 0, 2, 3, 9, 5, 80),
b4 = c(NA, 6, NA, 10, 12, 8, 3, 6, 2),
b5 = c(2, 12, 1, 7, 8, 5, 5, 6, NA),
b6 = c(9, 2, 4, 6, 7, 6, 6, 7, 9),
b7 = c(1, 3, 7, 7, 4, 2, 2, 9, 5),
b8 = c(NA, 8, 4, 5, 1, 4, 1, 3, 6),
b9 = c(4, 5, 7, 9, 5, 1, 1, 2, 12))
I wanted to create a new column (NEW) based on the following assumptions.
If b9 is <= 2 write yellow.
If b9 is between 4 and 7 write white.
If b9 is >= 9 write green
The idea is to create something like this.
my_df1 <- data.frame(
b1 = c(2, 6, 3, 6, 4, 2, 1, 9, NA),
b2 = c(100, 4, 106, 102, 6, 6, 1, 1, 7),
b3 = c(75, 79, 8, 0, 2, 3, 9, 5, 80),
b4 = c(NA, 6, NA, 10, 12, 8, 3, 6, 2),
b5 = c(2, 12, 1, 7, 8, 5, 5, 6, NA),
b6 = c(9, 2, 4, 6, 7, 6, 6, 7, 9),
b7 = c(1, 3, 7, 7, 4, 2, 2, 9, 5),
b8 = c(NA, 8, 4, 5, 1, 4, 1, 3, 6),
b9 = c(4, 5, 7, 9, 5, 1, 1, 2, 12),
NEW = c("white", "white", "white", "green", "white", "yellow", "yellow", "yellow", "green"))
I thought something like this will do it, but it didn't.
greater_threshold <- 2
greater_threshold1 <- 4
greater_threshold2 <- 7
greater_threshold3 <- 9
my_df1 <- my_df %>%
mutate(NEW = case_when(b9 <= greater_threshold ~ "yellow", b9 >= greater_threshold1 | b9 <= greater_threshold2 ~ "white", b9 >= greater_threshold3 ~ "green"))
Any help will be appreciated.
| How to group variables that fall within a range of numbers | You can use between from dplyr:
my_df %>%
mutate(NEW = case_when(
b9 <= 2 ~ "Yellow",
between(b9, 4, 7) ~ "white",
b9 >= 9 ~ "green"
))
Output:
b1 b2 b3 b4 b5 b6 b7 b8 b9 NEW
1 2 100 75 NA 2 9 1 NA 4 white
2 6 4 79 6 12 2 3 8 5 white
3 3 106 8 NA 1 4 7 4 7 white
4 6 102 0 10 7 6 7 5 9 green
5 4 6 2 12 8 7 4 1 5 white
6 2 6 3 8 5 6 2 4 1 Yellow
7 1 1 9 3 5 6 2 1 1 Yellow
8 9 1 5 6 6 7 9 3 2 Yellow
9 NA 7 80 2 NA 9 5 6 12 green
Values not falling within any of the conditions (e.g. 8) will be NA.
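If you would rather label those rows explicitly instead of leaving them as NA, a .default can be added (a sketch; .default requires dplyr >= 1.1.0, older versions use TRUE ~ "other" as the last condition instead):
my_df %>%
  mutate(NEW = case_when(
    b9 <= 2 ~ "Yellow",
    between(b9, 4, 7) ~ "white",
    b9 >= 9 ~ "green",
    .default = "other"
  ))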
|
76381410 | 76381690 | I have a df where the first little bit looks like:
>dput(df_long_binned_sound2[1:48,])
structure(list(id = c(20230420, 20230420, 20230420, 20230420,
20230420, 20230420, 20230420, 20230420, 20230420, 20230420, 20230420,
20230420, 20230420, 20230420, 20230420, 20230420, 20230424, 20230424,
20230424, 20230424, 20230424, 20230424, 20230424, 20230424, 20230424,
20230424, 20230424, 20230424, 20230424, 20230424, 20230424, 20230424,
20230424, 20230426, 20230426, 20230426, 20230426, 20230426, 20230426,
20230426, 20230426, 20230426, 20230426, 20230426, 20230426, 20230426,
20230426, 20230426), cons_id = c(1L, 2L, 3L, 4L, 5L, 6L, 7L,
8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 16L, 17L, 18L, 19L,
20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L,
33L, 34L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 42L, 43L, 44L, 45L,
46L, 47L), win = c(1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0,
1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1), sound = c(1, NA, 1.5,
NA, 2, NA, 2.75, NA, 7, NA, 8, NA, 4, NA, 6.5, NA, NA, 4.5, NA,
6, NA, 2, NA, 5.5, NA, 4.66666666666667, NA, 4.8, NA, 6, NA,
4.5, NA, 3, NA, 2.33333333333333, NA, 6, NA, 1, NA, 1, NA, 1.66666666666667,
NA, 4.5, NA, 5), sound2 = c(NA, 1, NA, 1.5, NA, 1.5, NA, 6, NA,
8, NA, 1, NA, 8, NA, 7, 3, NA, 5, NA, 5, NA, 5, NA, 6.5, NA,
8, NA, 6, NA, 5, NA, 5.66666666666667, NA, 3.5, NA, 2, NA, 2.42857142857143,
NA, 1.5, NA, 2, NA, 8, NA, 2.33333333333333, NA)), row.names = c(NA,
-48L), class = c("tbl_df", "tbl", "data.frame"))
I am running some cross-correlation analysis on it and I would like to save the number outputs of ccf(). I can save all the correlograms using:
ids <- unique(df_long_binned_sound2$id)
for (i in 1:length(ids)){
pdf(file = paste("/Users/myname/Desktop/Current Work/CRTT study - 2022/CRTT - Full/CRTT_r_Full/Wack_A_Mole/CC_CustomBin/CC/plot_", ids[i], ".pdf"),
width = 10, height = 10
)
ccf(df_long_binned_sound2$sound[which(df_long_binned_sound2$id == ids[i])], df_long_binned_sound2$sound2[which(df_long_binned_sound2$id == ids[i])],
na.action = na.pass,
main = paste("Corrected Correlogram \n Pair", ids[i]),
xlim = c(-6, 6)
)
dev.off()
}
and I can print the number outputs using:
for (i in 1:length(ids)){
print(ccf(df_long_binned_sound2$sound[which(df_long_binned_sound2$id == ids[i])],
df_long_binned_sound2$sound2[which(df_long_binned_sound2$id == ids[i])],
na.action = na.pass,
)
)
}
I would like to save the number outputs so that I end up with something like:
id        lag  lag_value
20230420   -9     -0.145
20230420   -8     -0.057
...

id        lag  lag_value
20230420    8     -0.183
20230420    9     -0.203
20230424   -9      0.234
...
I'm sure there is a simple solution but I can't seem to find it. I very optimistically tried and failed with:
df.cff <- data.frame()
for (i in 1:length(ids)){
cff.standin <- ccf(df_long_binned_sound2$sound[which(df_long_binned_sound2$id == ids[i])],
df_long_binned_sound2$sound2[which(df_long_binned_sound2$id == ids[i])],
na.action = na.pass,
)
df.cff <- cbind(df.cff, cff.standin)
}
Error in as.data.frame.default(x[[i]], optional = TRUE, stringsAsFactors = stringsAsFactors) :
cannot coerce class ‘"acf"’ to a data.frame
and:
df.cff <- data.frame()
for (i in 1:length(ids)){
cff.standin <- ccf(df_long_binned_sound2$sound[which(df_long_binned_sound2$id == ids[i])],
df_long_binned_sound2$sound2[which(df_long_binned_sound2$id == ids[i])],
na.action = na.pass,
)
df.cff <- rbind(df.cff, cff.standin)
}
Error in rbind(deparse.level, ...) :
invalid list argument: all variables should have the same length
Does anyone know a good way to save the number outputs of ccf() from a for loop? I am especially interested in a solution that formats the output like the table examples above.
TYIA :)
| saving ccf() looped output in r | You need to inspect the ccf object with View() or checking it's help page:
Value
An object of class "acf", which is a list with the following
elements:
lag A three dimensional array containing the lags at which the acf is
estimated.
acf An array with the same dimensions as lag containing the estimated
acf.
Thus, you just want to do something like:
cbind(id = ids[i], lag = cff.standin$lag, lag_value = cff.standin$acf)
Now for the full solution:
ids <- unique(df_long_binned_sound2$id)
df_ccf <- c() #empty vector to save results
for (i in ids){ #you can pass the ids directly, instead of their index
df1_subset <- df_long_binned_sound2[which(df_long_binned_sound2$id == i),] #saving an extra variable saves space in the call below
ccf_output <- ccf(df1_subset$sound, df1_subset$sound2,
na.action = na.pass,
main = paste("Corrected Correlogram \n Pair", i),
xlim = c(-6, 6)
)
df_ccf <- rbind(df_ccf, cbind(id = i, lag = ccf_output$lag, lag_value = ccf_output$acf)) #iteratively rbind the results
}
But I prefer something using tidyverse:
df_ccf <- df_long_binned_sound2 %>%
group_split(id) %>%
imap_dfr(function(data, index){
ccf(data$sound, data$sound2,
na.action = na.pass,
        main = paste("Corrected Correlogram \n Pair", ids[index]),
xlim = c(-6, 6)) %>%
{tibble(id = ids[index],
lag = as.numeric(.$lag),
lag_value = as.numeric(.$acf))}
})
|
76382508 | 76383091 | I have a executable used to generate a "cache" file. In CMake, I have something like this:
add_executable(Generator ...)
add_custom_target(OUTPUT cache
DEPENDS Generator OtherDep1 OtherDep2
COMMAND Generator --input OtherDep1 OtherDep2 --output cache)
However, because it takes about 10 minutes and I do not care of the cache differs when Generator changes, I do not want cache to be re-computed whenever Generator is re-linked for whatever reason. But if I remove Generator from the dependencies, it may not be available when the custom commands needs it.
I know this is a bit far from the usual Make/CMake workflow, but is there something I can do to require Generator to have been compiled before running the custom command?
| How to make a CMake custom command depend on a target being built but without rerunning on relink? |
I do not want cache to be re-computed whenever Generator is re-linked for whatever reason.
Then you need to define target-level dependencies instead of file-level ones. Target-level dependencies are defined with add_dependencies command:
add_executable(Generator ...)
# Custom command for **file-level** dependencies.
# The output will be rebuilt whenever it will be found older than one of its dependencies.
add_custom_command(OUTPUT cache
DEPENDS OtherDep1 OtherDep2
COMMAND Generator --input OtherDep1 OtherDep2 --output cache)
# Custom target.
# It just makes sure, that its dependencies are up-to-date.
add_custom_target(create_cache
DEPENDS cache
)
# **target-level** dependencies between the targets.
# They ensure ordering only, and do not cause one target to be rebuilt
# because another target has been rebuilt
add_dependencies(create_cache Generator)
|
76384961 | 76385001 | I'm trying to setup a sftp server with Apache MINA sshd. But I'm getting subsystem request failed on channel 0 while trying to connect to the server.
sftp -P 22 john@localhost
Password authentication
(john@localhost) Password:
subsystem request failed on channel 0
Connection closed
I was following this document. But I'm not sure whether I'm missing any essential parts here.
Following is the code I'm using at the moment with mina-sshd v2.10.0.
public class Main {
public static void main(String[] args) {
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(22);
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
sshd.setShellFactory(new ProcessShellFactory("/bin/sh", "-i", "-l"));
sshd.setCommandFactory(new ScpCommandFactory());
sshd.setPasswordAuthenticator(new MyPasswordAuthenticator());
try {
System.err.println("Starting SSHD on port 22");
sshd.start();
Thread.sleep(Long.MAX_VALUE);
System.err.println("Exiting after a very (very very) long time");
} catch (Exception e) {
e.printStackTrace();
}
}
}
| Unable to connect to Apache MINA sshd server | I think the error is caused by the server not allowing SFTP. If you check the SFTP docs for MINA, you can see that you can enable the SFTP subsystem like this:
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder()
//...
.build();
sshd.setSubsystemFactories(Collections.singletonList(factory));
For further diagnosis, you could try creating a custom SftpEventListener and registering it with factory.addSftpEventListener or similar.
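Wired into the Main class from the question, a minimal sketch could look like this (the VirtualFileSystemFactory line and its root path are optional assumptions, used only to confine SFTP users to one directory):
SshServer sshd = SshServer.setUpDefaultServer();
sshd.setPort(22);
sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider(Paths.get("hostkey.ser")));
sshd.setPasswordAuthenticator(new MyPasswordAuthenticator());

// Register the SFTP subsystem so "subsystem request failed" no longer occurs
SftpSubsystemFactory factory = new SftpSubsystemFactory.Builder().build();
sshd.setSubsystemFactories(Collections.singletonList(factory));

// Optional: serve files from a fixed root directory (path is just an example)
sshd.setFileSystemFactory(new VirtualFileSystemFactory(Paths.get("/tmp/sftp-root")));

sshd.start();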
|
76382989 | 76383095 | I have 3 functions, how can I plot them using differents intervals ?
This is my code:
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(-5,5,100)
y = 2*x+1
k = 3*x+2
i = 2*x+2
plt.plot(x, y, '-r', label='y=2x+1')
plt.plot(x, k, '-r', label='k =3x+2')
plt.plot(x, i, '-r', label='i =2x+2')
plt.title('3 functions on 3 intervals')
plt.xlabel('x', color='#1C2833')
plt.ylabel('y', color='#1C2833')
plt.legend(loc='upper left')
plt.grid()
plt.show()
Wanted style : 3 intervals, 3 linear functions :
This is what I get :
My wish is to draw the first function 2*x+1 on the following interval [x:2->x:8].
The second function 3*x+2 has to be plotted on the interval [x:8->x:12]
The third function 2*x+2 has to be plotted on the interval [x:12->x:20]
Is it possible ?
Edit :
Ended up with this :
x = np.linspace(-5,0,100)
t = np.linspace(0,5,100)
m = np.linspace(5,10,100)
y = 2*x+1
k = 3*x-2
i = 2*x+2
plt.plot(x, y, '-r', label='y=2x+1')
plt.plot(t, k, '-r', label='k =3x-2')
plt.plot(m, i, '-r', label='i =2x+2')
Result :
| Functions intervals | Is it this that you want?
In [33]: import matplotlib.pyplot as plt
...: import numpy as np
...:
...: xs = [np.linspace(0,4), np.linspace(-3, 5), np.linspace(4, 10)]
...: fs = [np.cos, np.sin, lambda x:np.sin(x)-2*np.cos(x)]
...: for x, f in zip(xs, fs):
...: plt.plot(x, f(x), label=f.__name__)
...: plt.legend()
...: plt.show()
Plotting a linear function is no different,
import matplotlib.pyplot as plt
import numpy as np
xs = [np.linspace(0,4), np.linspace(-3, 5), np.linspace(4, 10)]
fs = [np.cos, np.sin, lambda x:(x-6)*0.5]
fs[-1].__name__ = 'x/2-3'
for x, f in zip(xs, fs):
plt.plot(x, f(x), label=f.__name__)
plt.legend()
plt.show()
If and only if you are going to plot ONLY LINEAR FUNCTIONS,
another approach could be
import matplotlib.pyplot as plt
# plotting y = a x + b
y = lambda xmin, xmax, a, b: (a*xmin+b, a*xmax+b)
format = lambda b: ("y = %.2f x + %.2f"if b>=0 else"y = %.2f x – %.2f")
Xmin = [0, 4, 7]
Xmax = [5, 6, 9]
A = [1, 0.5, 3]
B = [-2, 0, 3]
for xmin, xmax, a, b in zip(Xmin, Xmax, A, B):
plt.plot((xmin, xmax), y(xmin, xmax, a, b),
label=format(b)%(a, abs(b)))
plt.legend()
plt.show()
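Applied to the exact functions and intervals from the question (a sketch, using the intervals [2, 8], [8, 12] and [12, 20]):
import matplotlib.pyplot as plt
import numpy as np

x1 = np.linspace(2, 8, 100)
x2 = np.linspace(8, 12, 100)
x3 = np.linspace(12, 20, 100)

plt.plot(x1, 2*x1 + 1, label='y = 2x + 1')
plt.plot(x2, 3*x2 + 2, label='k = 3x + 2')
plt.plot(x3, 2*x3 + 2, label='i = 2x + 2')
plt.legend()
plt.grid()
plt.show()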
|
76381322 | 76381691 | I am trying to find all records between two dates, but can't figure out the proper query.
The mapping looks like this
GET my-books-index-1/_mapping
{
"my-books-index-1": {
"mappings": {
"properties": {
"book": {
"properties": {
"bookInfo": {
"properties": {
"publisherInfo": {
"type": "nested",
"properties": {
"publication": {
"properties": {
"publishedOn": {
"type": "date"
}
}
}
}
}
}
}
}
}
}
}
}
}
Following is a sample record for the above mapping
"_source": {
"book": {
"name": "Harry Potter",
"bookInfo": {
"author": "J.K. Rowling",
"publisherInfo": [
{
"price": "100",
"publication": {
"publishedOn": 1685268404000 // [Sunday, May 28, 2023 10:06:44 AM]
}
}
]
}
}
}
[NOTE]: Some additional properties are removed from the mapping sample to keep it short and precise.
I am trying to find all books published between 25th May to 31st May.
Any help is appreciated. Thanks.
| Elasticsearch query for deeply nested field | You can use a range query inside the nested path.
PUT test_my-books-index-1
{
"mappings": {
"properties": {
"book": {
"properties": {
"bookInfo": {
"properties": {
"publisherInfo": {
"type": "nested",
"properties": {
"publication": {
"properties": {
"publishedOn": {
"type": "date"
}
}
}
}
}
}
}
}
}
}
}
}
POST test_my-books-index-1/_bulk?refresh
{"index":{"_id":"1"}}
{"book":{"name":"Harry Potter","bookInfo":{"author":"J.K. Rowling","publisherInfo":[{"price":"100","publication":{"publishedOn":1685268404000}}]}}}
A dynamic date range, matching documents from the last 10 days:
GET test_my-books-index-1/_search
{
"query": {
"bool": {
"must": [
{
"nested": {
"path": "book.bookInfo.publisherInfo",
"query": {
"range": {
"book.bookInfo.publisherInfo.publication.publishedOn": {
"gte": "now-10d",
"lte": "now"
}
}
}
}
}
]
}
}
}
To search with exact dates:
GET test_my-books-index-1/_search
{
"query": {
"bool": {
"must": [
{
"nested": {
"path": "book.bookInfo.publisherInfo",
"query": {
"range": {
"book.bookInfo.publisherInfo.publication.publishedOn": {
"gte": "25/05/2023",
"lte": "31/05/2023",
"format": "dd/MM/yyyy||yyyy"
}
}
}
}
}
]
}
}
}
another example here: elasticsearch nested range query
|
76380576 | 76381711 | This is my attempt at the problem asked in this thread. When I try to run it with input egg_weights = (1,5,10,25) and n = 99, it seems to run into an infinite loop. The code seems to give the correct answer for smaller n, albeit very slowly. What went wrong here?
def dp_make_weight(egg_weights, target_weight, memo = {}):
if target_weight < 0:
return float('inf')
elif target_weight == 0:
return 0
elif target_weight > 0:
try:
return memo[target_weight]
except:
memo[target_weight] = float('inf')
for weight in egg_weights:
result = dp_make_weight(egg_weights, target_weight - weight, memo = {})
if result < memo[target_weight]:
memo[target_weight] = result + 1
return result + 1
Here's the code that was provided for testing purpose.
if __name__ == '__main__':
egg_weights = (1, 5, 10, 25)
n = 99
print("Egg weights = (1, 5, 10, 25)")
print("n = 99")
print("Expected ouput: 9 (3 * 25 + 2 * 10 + 4 * 1 = 99)")
print("Actual output:", dp_make_weight(egg_weights, n))
print()
| 0/1 Knapsack Problem with Dynamic Programming | If you are planning to call dp_make_weight for different egg weight lists, then the default memo argument should be handled as follows. Also, read my comments in the code:
def dp_make_weight(egg_weights, target_weight, memo=None):
if memo is None:
memo = {}
infinity = float('inf')
if target_weight < 0:
return float('inf')
elif target_weight == 0:
return 0
else: # target_weight must be > 0
if target_weight in memo:
return memo[target_weight]
result = infinity
for weight in egg_weights:
# Only update result when dp_make_weight returns a value smaller than
# the current result value. Also note that the current value of memo
# is what is being passed and not a new, empty dict:
new_result = dp_make_weight(egg_weights, target_weight - weight, memo)
if new_result < result:
result = new_result
result += 1
memo[target_weight] = result
return result
if __name__ == '__main__':
egg_weights = (1, 5, 10, 25)
n = 99
print("Egg weights =", egg_weights)
print("n =", n)
print("Expected ouput: 9 (3 * 25 + 2 * 10 + 4 * 1 = 99)")
print("Actual output:", dp_make_weight(egg_weights, n))
print()
egg_weights = (1, 6, 9, 12, 13, 15)
n = 724
print("Egg weights =", egg_weights)
print("n =", n)
print("Expected ouput: 49")
print("Actual output:", dp_make_weight(egg_weights, n))
Prints:
Egg weights = (1, 5, 10, 25)
n = 99
Expected ouput: 9 (3 * 25 + 2 * 10 + 4 * 1 = 99)
Actual output: 9
Egg weights = (1, 6, 9, 12, 13, 15)
n = 724
Expected ouput: 49
Actual output: 49
|
76381523 | 76381725 | I would like get accumulating weighted-average prices by sym from a table, meaning taking account of not just the previous record but all previous records.
Input
q)show t:([]sym:`a`a`a`b`b;size:(2;6;2;7;5);price:(2;10;3;4;9))
sym size price
--------------
a 2 2
a 6 10
a 2 3
b 7 4
b 5 9
Desired Output:
q)show t:([]sym:`a`a`b`b;size:(2;6;7;5);price:(2;10;4;9);avgPrice:(2;8;4;6.083))
sym size price avgPrice
-----------------------
a 2 2 2
a 6 10 8
a 2 3 7
b 7 4 4
b 5 9 6.083
so for the second row: (2*2+10*6)/(2+6)=8
so for the third row: (2*2+10*6+2*3)/(2+6+2)=7
so for the fourth row: (7*4+5*9)/(7+5)=6.083
Any help would be appreciated.
Thanks in advance.
| How can I get accumulating weighted-average prices in KDB+ by symbol from a table, taking into account all previous records? | update avgPrice:(sums price*size)%sums size by sym from t
sym size price avgPrice
-----------------------
a 2 2 2
a 6 10 8
a 2 3 7
b 7 4 4
b 5 9 6.083333
|
76384914 | 76385010 | I have a TableData class:
public class TableData
{
public string ID, WrestlerID;
public string Name;
}
And some data that I then put on a list:
List<TableData> _tableData = new List<TableData>();
TableData tableData = new TableData
{
ID = "0",
WrestlerID = "000",
Name = "test1"
};
_tableData.Add(tableData);
TableData tableData2 = new TableData
{
ID = "1",
WrestlerID = "111",
Name = "test2"
};
_tableData.Add(tableData2);
I then iterate through my _tableData list and add each item on my DataGrid:
foreach (TableData data1 in _tableData)
{
DGTable.Items.Add(data1);
}
BTW Here's my DataGrid:
<DataGrid x:Name="DGTable" Grid.Row="1">
<DataGrid.Columns>
<DataGridTextColumn Header="ID" Binding="{Binding ID}" Width="100"/>
<DataGridTextColumn Header="Name" Binding="{Binding Name}" Width="*"/>
<DataGridTextColumn Header="Wrestler ID" Binding="{Binding WrestlerID}" Width="200"/>
</DataGrid.Columns>
</DataGrid>
When I run the app, the DataGrid displays 2 rows but all fields are empty. Any thoughts? Thanks!
| Items are showing blank in DataGrid | Your TableData class needs to have properties instead of fields to be able to use bindings.
It should also implement the INotifyPropertyChanged interface to use observable properties, so that changes to those properties get reflected in the UI.
Change your class as follows:
public class TableData : INotifyPropertyChanged
{
public event PropertyChangedEventHandler PropertyChanged;
private void OnPropertyChanged([CallerMemberName] String propertyName = "")
{
PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
}
private string id;
public string ID
{
get => id;
set
{
if(id == value) return;
id = value;
OnPropertyChanged();
}
}
// repeat for WrestlerID and Name
//...
}
Don't forget to add using System.ComponentModel; at the top.
|
76382920 | 76383096 | How to make it like that so odd indexes will be doing (-) and even indexes will do (+) The max iteration is 6. iteration 1 +10, iteration 2 -20, iteration 3 +30, iteration 4 -40, iteration 5 + 50, iteration 6 -60
AA = np.array([[9.27914]+10,
[9.33246]-20,
[9.26303]+30,
[9.30597]-40,
[9.6594 ]+50,
[9.04283]-60,
[8.88866]+10,
[8.89956]-20])
expected results:
AA=np.array([
[19.27914],
[-10.66754],
[39.26303],
[-30.69403],
[59.6594],
[-50.95717],
[18.88866],
[-11.10044],
])
I try use this code but not working
max_iter = 6
iter = 0
for i in range(len(AA)):
if i % 2 == 0:
AA[i][0] = AA[i][0] + (iter % max_iter)
else:
AA[i][0] = AA[i][0] - (iter % max_iter)
iter += 10
| How to do a dynamic calculation in Python | You were very close. Just three small changes were needed: I added a +1 inside the parentheses, multiplied each of the array operations by 10, and changed iter += 10 to iter += 1
max_iter = 6
iter = 0
for i in range(len(AA)):
if i % 2 == 0:
AA[i][0] = AA[i][0] + (iter % max_iter+1)*10
else:
AA[i][0] = AA[i][0] - (iter % max_iter+1)*10
iter += 1
In fact, you can remove the if else statement and do it in a single line if you use the following:
AA[i][0] = AA[i][0] +(-1)**(i)* (iter % max_iter+1)*10
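The same thing can also be done without an explicit loop, as a fully vectorized sketch (assuming AA keeps the (n, 1) shape from the question):
idx = np.arange(len(AA))
offsets = (idx % max_iter + 1) * 10      # 10, 20, ..., 60, then back to 10
signs = np.where(idx % 2 == 0, 1, -1)    # + for even rows, - for odd rows
AA[:, 0] = AA[:, 0] + signs * offsets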
|
76384866 | 76385024 | I am trying to set this input control using $input_group.find('input'); but it is not getting set. Is this the correct way to use find and then set the value of the input control or is there anyway to do this?
var $container = $('#gridrow-field-container');
var template = $('#gridrow-template-input-group').get(0);
$(item.MeetingPollingPartsValues).each((indexPartsValues, PartsValues) => {
var $input_group = $(template.content.cloneNode(true));
var inputControl = $input_group.find('input');
inputControl.val(PartsValues.QuestionValue);
console.log(inputControl);
console.log($input_group);
$container.append($input_group);
$('input_group').val(PartsValues.QuestionValue);
});
<template id="gridrow-template-input-group">
<div class='row mb-3' id='newrowItem_1'>
<div class="input-group">
<input type='text' id='fieldrowItem_1' name='name[]' class='form-control fieldrowItem mb-3' placeholder="Row 1" data-value="0" >
<span id='spanrowItem_1' class="input-group-addon" style="cursor:pointer;" onclick="RemoveRow(this)" >
<i class="fa fa-remove" style="color:#CDCDCD"></i>
</span>
</div>
</div>
</template>
| Set value of input control from input_group | I added Bootstrap 5 dependencies and fixed the template.
You can clone the contents of the template with:
const $inputGroup = $template.contents().clone();
const $container = $('#gridrow-field-container');
const $template = $('#gridrow-template-input-group');
const RemoveRow = (span) => {
$(span).closest('.row').remove();
}
const item = {
MeetingPollingPartsValues: [
{ QuestionValue: 'One' },
{ QuestionValue: 'Two' },
{ QuestionValue: 'Three' }
]
};
$(item.MeetingPollingPartsValues).each((index, partValue) => {
const $inputGroup = $template.contents().clone();
const $inputControl = $inputGroup.find('input');
$inputControl.val(partValue.QuestionValue);
$inputControl.attr('placeholder', `Row ${index + 1}`);
$inputControl.attr('data-value', partValue.QuestionValue);
$container.append($inputGroup);
});
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous">
<link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css" rel="stylesheet"/>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-geWF76RCwLtnZ8qwWowPQNguL3RmwHVBC9FhGdlKrxdiJJigb/j/68SIy3Te4Bkz" crossorigin="anonymous"></script>
<template id="gridrow-template-input-group">
<div class="row">
<div class="input-group mb-3">
<input type="text" name="name[]" class="form-control"
placeholder="Row x" data-value="0" >
<div class="input-group-text" style="cursor:pointer;" onclick="RemoveRow(this)">
<i class="fa fa-remove" style="color:#CDCDCD"></i>
</div>
</div>
</div>
</template>
<div id="gridrow-field-container" class="container"></div>
|
76384474 | 76385026 | I have been trying to add multiple entries on the search bar of the renderdt table function on shiny.
for example on the following code, instead of having a new search bar, i want to modify the one which is inbuilt in renderDT and allow it to take multiple entries, comma separated; for example setosa,virginica should bring rows with both setosa and virginica. I found solutions to add a new search bar but i wanted to know if i can modify this one accordingly. Any help regarding this would be highly appreciated.
if (interactive()) {
library(shiny)
library(DT)
shinyApp(
ui = fluidPage(fluidRow(column(12, DTOutput('tbl')))),
server = function(input, output) {
output$tbl = renderDT(
iris, options = list(lengthChange = FALSE)
)
}
)
}
i tried something like this, but this adds another search bar option and that is unnecessary
if (interactive()) {
library(shiny)
library(DT)
shinyApp(
ui = fluidPage(
fluidRow(DTOutput('tbl'))
),
server = function(input, output) {
output$tbl = renderDT({
data <- iris
searchItems <- unlist(strsplit(input$search, ",")) # Split input string by commas
searchItems <- trimws(searchItems) # Remove leading/trailing whitespace
filteredData <- data[data$Species %in% searchItems, ]
datatable(filteredData, options = list(lengthChange = FALSE))
})
}
)
}
| How can I modify the inbuilt search bar of RenderDT in R Shiny to allow multiple entries separated by commas? | You can use this code:
library(shiny)
library(DT)
callback <- function(sep) {
sprintf('
$("div.search").append($("#mySearch"));
$("#mySearch").on("keyup redraw", function(){
var splits = $("#mySearch").val().split("%s").filter(function(x){return x !=="";})
var searchString = "(" + splits.join("|") + ")";
table.search(searchString, true).draw(true);
});
', sep)
}
ui <- fluidPage(
tags$head(tags$style(HTML(".search {float: right;}"))),
br(),
tags$input(type = "text", id = "mySearch", placeholder = "Search"),
DTOutput("dtable")
)
server <- function(input, output){
output[["dtable"]] <- renderDT({
datatable(
iris[c(1, 2, 51, 52, 101, 102),],
options = list(
dom = "l<'search'>rtip"
),
callback = JS(callback(","))
)
}, server = FALSE)
}
shinyApp(ui, server)
Personally I prefer the search builder:
datatable(
iris[c(1, 2, 51, 52, 101, 102),],
extensions = "SearchBuilder",
options = list(
dom = "Qlfrtip",
      searchBuilder = TRUE
)
)
|
76383038 | 76383142 | I'm trying to write a test involving the filesystem. I chose to use pyfakefs and pytest for writing these tests. When I was trying to write and then read from the fake filesystem, I couldn't seem to get any tests to work. So, I wrote a simple test to ensure that pyfakefs was reading the right value:
def test_filesystem(fs):
with open("fooey.txt", "w+") as my_file:
my_file.write("Hello")
read = my_file.read(-1)
assert os.path.exists("fooey.txt")
assert "Hello" in read
The first assertion passes. The second one fails. When I debug, read has a value of ''. I'm struggling to understand what's going on here. Does file writing or reading not work within pyfakefs? Am I doing something wrong?
| Pytestfs write then read doesn't return expected value | def test_filesystem(fs):
with open("fooey.txt", "w") as my_file:
my_file.write("Hello")
with open("fooey.txt", "r") as my_file:
read = my_file.read()
    assert os.path.exists("fooey.txt")
assert "Hello" in read
This should do it!
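For context, the original read returned '' because the file position was already at the end of what had just been written. If you prefer to keep a single open() call, rewinding with seek also works (a sketch):
def test_filesystem_seek(fs):
    with open("fooey.txt", "w+") as my_file:
        my_file.write("Hello")
        my_file.seek(0)          # rewind to the start before reading
        read = my_file.read()
    assert os.path.exists("fooey.txt")
    assert "Hello" in read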
|
76381054 | 76381777 | I want to place a numpy array in a cell of a pandas dataframe.
For specific reasons, before assigning the array to the cell, I add another column in the same dataframe, whose values are set to NaN.
Can someone help me understand what adding the column with the nans does to my data frame, why breaks the code, and how I can fix it?
Inserting an array into a column works:
import pandas as pd
import numpy as np
#%% this works as expected
df = pd.DataFrame([0, 1, 2, 3, 4], columns=['a'])
df['a'] = df['a'].astype(object)
df.loc[4, 'a'] = np.array([5, 6, 7, 8])
df
But after inserting the column with nans, the same code breaks and I get the following error:
ValueError: Must have equal len keys and value when setting with an iterable
#%% after adding a second column, x, filled with nan, the code breaks
df = pd.DataFrame([0, 1, 2, 3, 4], columns=['a'])
df['x'] = np.nan
df['a'] = df['a'].astype(object)
df.loc[4, 'a'] = np.array([5, 6, 7, 8])
df
Finally, I want to add the array to the new column, but I get the same error.
#%% this is what I want to do, breaks, too
df = pd.DataFrame([0, 1, 2, 3, 4], columns=['a'])
df['x'] = np.nan
df['x'] = df['x'].astype(object)
df.loc[4, 'x'] = np.array([5, 6, 7, 8])
df
| Adding numpy array to Pandas dataframe cell results in ValueError | If you only need to set a single cell, use at:
df.at[4, 'a'] = np.array([5, 6, 7, 8])
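The same works for the new column from the third snippet in the question, as long as the column has been cast to object first (a sketch):
df = pd.DataFrame([0, 1, 2, 3, 4], columns=['a'])
df['x'] = np.nan
df['x'] = df['x'].astype(object)   # allow arbitrary Python objects in the column
df.at[4, 'x'] = np.array([5, 6, 7, 8])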
|
76381509 | 76381781 | I have a script which outputs an excel file '.xlsx' containing various data. It generates a file with the date in the name in one folder, and then generates a copy, using shutil.copy(), in a separate folder. I then rename the file using os.rename(), however instead of overwriting the file already there, it produces the following error:
"FileExistsError: [WinError 183] Cannot create a file when that file already exists:"
I need the file to be exactly the same name everytime the script is run, as it is subsequently used as an automatic input file for PowerBI.
Current code is as follows:
# Select file you want to copy & where to copy it to
src_file = vb.output_path
destination = vb.path_reports_cashflowcopy
# Copy the file
shutil.copy(src_file, destination)
# Define copy path
cashflow_copy_path = vb.ROOT_DIR + '\\Data_and_Reports\\Reports\\Cashflow Copies\\'
# Rename the file
os.rename(cashflow_copy_path + str(date.today()) + ' - Lettings Report BETA - ' + vb.academic_year_selection + '.xlsx',
cashflow_copy_path + 'Copy ' + vb.academic_year_selection + '.xlsx')
src_file is the original output excel file created.
vb.xxxxxx is from a connected script which describes various file paths and other inputs used to name the files.
| How can I overwrite a file in a different folder using shutil.copy() and os.rename() in Python? | To avoid the "FileExistsError" when renaming the file, you can check if the destination file already exists before renaming it.
import os
import shutil
from datetime import date
# Select file you want to copy & where to copy it to
src_file = vb.output_path
destination = vb.path_reports_cashflowcopy
# Copy the file
shutil.copy(src_file, destination)
# Define copy path
cashflow_copy_path = vb.ROOT_DIR + '\\Data_and_Reports\\Reports\\Cashflow Copies\\'
# Rename the file
new_file_name = cashflow_copy_path + 'Copy ' + vb.academic_year_selection + '.xlsx'
existing_file = cashflow_copy_path + str(date.today()) + ' - Lettings Report BETA - ' + vb.academic_year_selection + '.xlsx'
if os.path.exists(new_file_name):
    os.remove(new_file_name)  # Delete the previous copy so the rename cannot collide
os.rename(existing_file, new_file_name)
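Alternatively, os.replace overwrites the destination if it already exists, which removes the need for the explicit check:
os.replace(existing_file, new_file_name)  # silently overwrites new_file_name if present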
|
76385031 | 76385105 | I have an ngFor loop set up like this:
<div *ngFor="let record of this.RecordsProcessed; let i = index">
<div class="row my-16" data-test='test'_{{i}}>
<div class="col-4">Id:</div>
<div class="col-8">{{record?.Id}}</div>
</div>
</div>
I want to put the index from ngFor on the data-text tag within the html. Is it possible to do something like this within the html?
| How to make the index from ngFor part of an html tag value | Try like this:
<div *ngFor="let record of this.RecordsProcessed; let i = index">
<div class="row my-16" [attr.data-test]="'test_' + i">
<div class="col-4">Id:</div>
<div class="col-8">{{record?.Id}}</div>
</div>
</div>
The [] brackets let Angular know that everything inside the quotes is evaluated as an expression rather than treated as a literal string.
|
76385070 | 76385109 | I am trying to decipher this code (MurmurHash) and came across the following lines:
switch (remainder) {
case 3: k1 ^= (key.charCodeAt(i + 2) & 0xff) << 16;
case 2: k1 ^= (key.charCodeAt(i + 1) & 0xff) << 8;
case 1: k1 ^= (key.charCodeAt(i) & 0xff);
// When is this executed?
k1 = (((k1 & 0xffff) * c1) + ((((k1 >>> 16) * c1) & 0xffff) << 16)) & 0xffffffff;
k1 = (k1 << 15) | (k1 >>> 17);
k1 = (((k1 & 0xffff) * c2) + ((((k1 >>> 16) * c2) & 0xffff) << 16)) & 0xffffffff;
h1 ^= k1;
}
My question is as follows: I have never seen code inside a switch statement that is not part of either a case or a default and would greatly appreciate it if someone could explain when the part after the last case statement is supposed to get executed.
Is it an alternative way of writing a default statement?
Or will this always get executed, just as if it were written outside of the switch block?
Information on this topic seems very difficult to come by as documentation on switch statements generally deals with case and default, and it's also impossible to test without changing the code too much which might affect its behavior.
Thanks in advance!
| JavaScript - Code after case in switch statements | The code is part of case 1.
Personally I'd re-arrange the whitespace to be more clear:
switch (remainder) {
case 3:
k1 ^= (key.charCodeAt(i + 2) & 0xff) << 16;
case 2:
k1 ^= (key.charCodeAt(i + 1) & 0xff) << 8;
case 1:
k1 ^= (key.charCodeAt(i) & 0xff);
k1 = (((k1 & 0xffff) * c1) + ((((k1 >>> 16) * c1) & 0xffff) << 16)) & 0xffffffff;
k1 = (k1 << 15) | (k1 >>> 17);
k1 = (((k1 & 0xffff) * c2) + ((((k1 >>> 16) * c2) & 0xffff) << 16)) & 0xffffffff;
h1 ^= k1;
}
The point is that, without a break; in any of the cases, any matched case will execute and then control will flow to the next case.
So, assuming remainder can only be 1, 2, or 3...
If it's 3, all statements are executed.
If it's 2, case 3 is skipped but the rest is executed.
If it's 1, case 3 and case 2 are skipped but the rest is executed.
The logic is, perhaps a bit unintuitively, relying on the control flow of switch to continue on to the next (not-actually-matching) case.
|
76382771 | 76383156 | I have a data set:
df<- structure(list(Depth = c(6.83999999999997, 8.56, 4.64999999999998,
8.83999999999997, 6.56, 8.64999999999998, 12.21, 11.82, 5.41000000000003,
11.63, 9.41000000000003, 11.26, 8.95999999999998, 10.81, 10.68,
12.74, 14.06, 8.16000000000003, 12.31, 10.76, 10.74, 1, 9.38,
5, 4, 12, 6.70999999999998, 8.56, 14.65, 16.71, 12.56, 18.65,
20.21, 11.82, 13.41, 13.63, 13.41, 13.26, 22.96, 14.81, 20.74,
30.06, 30.16, 32.31, 32.21, 14.76, 14.74, 4.66000000000003, 10,
4, 15, 8.70999999999998, 32.65, 26.21, 29.82, 29.41, 5.63, 23.41,
29.26, 2.95999999999998, 2.81, 2.68000000000001, 2.74000000000001,
2.06, 2.16000000000003, 2.31, 4.20999999999998, 8.75999999999999,
2.74000000000001, 18.66, 3, 4, 20, 6.83999999999997, 1, 6.64999999999998,
6.20999999999998, 1.81999999999999, 1.41000000000003, 3.63, 3.41000000000003,
5.25999999999999, 2.95999999999998, 2.81, 1, 2.74000000000001,
4.06, 4.16000000000003, 4.31, 4.20999999999998, 2.75999999999999,
2.74000000000001, 1, 5, 3, 4.70999999999998, 2.56, 2.64999999999998,
10.21, 7.81999999999999), NEAR_DIST = c(18.77925552, 18.30180262,
61.36019078, 179.2770495, 10.43166516, 17.9171804, 46.20571245,
31.99340507, 10.43166516, 26.7170903, 24.47782541, 33.08965222,
27.27138524, 43.4212158, 46.0670014, 50.11661352, 47.39692573,
64.4374351, 49.66872737, 12.12884673, 15.13068812, 25.02246826,
10.46189005, 13.46373164, 16.89230952, 13.51981867, 32.50661183,
38.24201162, 38.5502434, 82.06185032, 49.57486607, 90.64395203,
83.61730031, 49.74483449, 397.2686612, 53.49338859, 68.02475678,
59.6583949, 130.7528811, 67.27058895, 111.2988217, 347.3593823,
220.5169227, 268.5649787, 194.9220113, 84.48739079, 57.1344938,
24.35529161, 54.84148996, 18.74063124, 66.63864028, 203.7119682,
829.3788162, 309.4190672, 395.4959263, 326.7671063, 35.65309711,
264.2374189, 307.025746, 23.02085763, 26.3683775, 22.93486062,
25.28307029, 15.49632807, 14.59667995, 13.36925569, 11.9476145,
152.7517309, 11.30381957, 74.36911773, 3.773174432, 6.825998674,
79.40020637, 38.8451901, 3.853365482, 34.8719427, 38.02805106,
21.06138328, 20.76016614, 37.60511548, 25.71672169, 41.9543577,
26.1675823, 26.1675823, 16.49388675, 29.12695505, 29.12695505,
25.21064884, 27.6250245, 25.21064884, 21.06138328, 18.59893184,
11.08799823, 19.92747995, 16.25210115, 18.52964249, 5.582718512,
10.11944373, 56.29794875, 36.03064946), Season2 = structure(c(3L,
3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 1L,
1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 3L,
3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L,
2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 4L, 1L, 1L,
1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 4L,
4L, 4L, 4L, 4L, 4L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L,
3L, 3L, 3L), levels = c("Winter", "Spring", "Summer", "Fall"), class = c("ordered",
"factor"))), row.names = c(NA, -100L), class = c("tbl_df", "tbl",
"data.frame"))
and am running a gam with the data:
library(mgcv)
library(gratia)
gam<-gam(Depth~s(NEAR_DIST)+Season2,data=df)
even though the Season2 variable is ordered:
unique(df$Season2)
[1] Summer Fall Winter Spring
Levels: Winter < Spring < Summer < Fall
when I call:
draw(parametric_effects(gam))
The order of the x-axis is alphabetical.
How can I get the x-axis to match the order of my factor here? The old version of gratia used to do this. I have: version 0.8.1.34
| Getting the correct order of factors for parametric terms in gratia::draw(parametric_effects()) | As a workaround or fix for your issue you could set the order using the limits argument of scale_x_discrete:
library(mgcv)
#> Loading required package: nlme
#> This is mgcv 1.8-42. For overview type 'help("mgcv-package")'.
library(gratia)
library(ggplot2)
packageVersion("gratia")
#> [1] '0.8.1.34'
gam <- gam(Depth ~ s(NEAR_DIST) + Season2, data = df)
draw(parametric_effects(gam)) +
ggplot2::scale_x_discrete(limits = levels(df$Season2))
|
76383137 | 76383166 | I want to create simple react component with form inside, every time that user click Submit button, data should be send to the server.
For testing purposes I created component with two buttons, one stand alone and one inside form.
return (
<>
<button onClick={submit}>Working example</button>
<form style={cellStyle}
onSubmit={submit}>
<div>
<input type="submit"
value={reactDict["send"]}
className="btn btn-success"
style={{ margin: "15px 5px 5px 0px" }} />
</div>
</form>
</>
);
Both of them use same "submit" function
const submit = (event) => {
event.preventDefault();
const requestOptions = {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ test: 'some message' })
};
fetch(`neededitems`, requestOptions)
.then(async response => {
console.log(response);
})
.catch(error => {
console.error('There was an error!', error);
});
}
When I click button outside form everything works fine ,
When I click submit button inside form I got an error .
| Why I can't send data using submit button inside Form | Add the attribute method="post" to the element.
|
76384759 | 76385111 | I am translating Rust code into C#.
Rust has some type called Box<> which is said to be C++'s equivalent of smart pointers.
What should be the C# equivalent of Rust's Box<> type?
| What should be the C# equivalent of Rust's Box<> type? | The short answer is that there isn't an exactly-equivalent type.
Box is used to put a value on the heap. There are several reasons why you would want to do that, but that's all it does.
In C#, reference types always point to a heap allocation. "Boxing" is therefore pointless on reference types, as they are already effectively boxed.
C# value types are instead allocated directly, either on the stack or directly inside an object's heap allocation if used as a non-static class field. You can box a C# value type by casting it to a compatible reference type: either object or an interface type implemented by the value type.
However, reference types in C# are not just boxed, they also have shared ownership as well as shared mutability and so C# reference types are closer to the Rust types that implement those behaviors, such as Rc<RefCell<T>> or Arc<Mutex<T>>, though there are still very relevant semantic differences between those types and C# reference types (Rc<_> can't be sent to other threads, Arc<Mutex<_>> has to be locked to access the inner value, both Rc and Arc can create reference cycles that could cause a memory leak, etc.). Shared mutability in particular requires some kind of synchronization/atomicity to even pass the Rust compiler, where C# has no problem letting you create data races.
In other words, you need to look at why the value is boxed.
Is it to enable polymorphism on a set of heterogeneous values (Box<dyn _>)? Just use C# interfaces.
Is it to enable a recursive structure? Just use C# classes, which can self-recurse without issue.
|
76383032 | 76383179 | This is a secondary question as I thought my previously answered question was resolved.
Here is my use case:
Customer (office) buys physical products. We collect the
information on the mobile app and then the server creates a Stripe
Customer and a PaymentIntent. This succeeds, as evidenced by
Stripe portal
When the payment is finalized, my web hook event captures the “charge.succeeded”
event and it is my understanding that now that I have a
paymentMethod I can set it up pay automatically with the confirm and
redirect-url. However, no attempt by me has been successful.
I then create a subscriber, and I want to use the above customer payment
method to manage the subscription payment. The payment for this
shows as incomplete, and I have to manually confirm it.
Here is how I am handling the server side:
Create payment Intent:
Stripe.apiKey = API_SECRET_KEY;
long totalCharge = calcTotalCharge(purchaseRequest.getRequestedProducts());
PaymentIntentCreateParams paymentIntentCreateParams = PaymentIntentCreateParams.builder()
.setCustomer(customer.getId())
.setAmount(totalCharge)
.setCurrency("usd")
.setDescription(OFFICE_PURCHASE)
.setSetupFutureUsage(SetupFutureUsage.OFF_SESSION)
.setAutomaticPaymentMethods(PaymentIntentCreateParams.AutomaticPaymentMethods.builder()
.setEnabled(true)
.build())
.build();
PaymentIntent paymentIntent = PaymentIntent.create(paymentIntentCreateParams);
SetupIntentCreateParams setupIntentParms =
SetupIntentCreateParams.builder()
.setCustomer(customer.getId())
.addPaymentMethodType("card")
.build();
SetupIntent setupIntent = SetupIntent.create(setupIntentParms);
This all appears to be correct. I use the paymentIntento with the Stripe Elements to complete the order. I I cannot set the confirm or auto payment because I don’t have the payment method at this point.
Webhook event - this throws an exception: java.lang.RuntimeException: com.stripe.exception.InvalidRequestException: Received unknown parameters: enabled, returnUrl, confirm; code: parameter_unknown; request-id: req_My6nCQVFVNbsSgtry
try {
PaymentIntent paymentIntent = PaymentIntent.retrieve(charge.getPaymentIntent());
Map<String, Object> automaticPaymentMethods = new HashMap<>();
automaticPaymentMethods.put("enabled", true);
automaticPaymentMethods.put("confirm", true);
automaticPaymentMethods.put("returnUrl", "https://cnn.com”); <== this is just for Stripe requirement, it does nothing
logger.info("webhook updating paymentIntent.automatic payment method as {} ", paymentIntent);
} catch (StripeException e) {
throw new RuntimeException(e);
}
So where I appear to be stuck is how to set the customer's paymentMethod to be confirmed automatically, since the subscriber will not have the ability to confirm the payment. I was also uncertain about a custom URL scheme or a universal link, despite the links you provided.
Update to answer responses:
Webhook does this when customer payment is received:
(I am no longer trying to set the automaticPaymentMethods)
PaymentIntent paymentIntent = PaymentIntent.retrieve(charge.getPaymentIntent());
paymentIntent.getAutomaticPaymentMethods().setEnabled(true);
String paymentMethod = charge.getPaymentMethod();
String customerId = charge.getCustomer();
Long chargeAmount = charge.getAmountCaptured();
// now we can update the pending order with the paymentMethod
try {
Customer customer = Customer.retrieve(customerId);
customer.update(CustomerUpdateParams.builder()
.setInvoiceSettings(InvoiceSettings.builder()
.setDefaultPaymentMethod(paymentMethod)
.build())
.build());
} catch (StripeException se) {
logger.error("unable to customer {} the paymentMethod {}", customerId, paymentMethod);
}
| Stripe Customer paymentMethod applied to its subscribers | In order to create Subscriptions with the customer's attached payment method, you need to set it as the default payment method for the customer, specifically via the customer.invoice_settings.default_payment_method parameter.
Once you do that, the subscription should charge the default payment method on creation.
For your second question, I don't fully understand what you're trying to do exactly. Automatic Payment Methods parameter on a PaymentIntent only supports enabled property. So not sure why you're trying to set confirm and returnUrl there. Are you following a guide for this?
|
76381709 | 76381792 | How to convert a jupyter notebook to a python script with cell delimiters (#%%)?
I've already checked nbconvert , but it doesn't seem to have the one. Also, the same question found, the answer doesn't satisfy the need because actual raw source codes of jupyter notebook isn't structured as such. (It'd be better to be able to convert at once, instead of converting with nbconvert first and then pattern matching)
Any tools recommended? Or could you share a script to achieve this?
| How to convert a jupyter notebook to a python script with cell delimiters (#%%)? | That looks similar to the percent delimiter that Jupytext handles, see the top few commands here also. The specific commands I'm referencing:
jupytext --to py:percent notebook.ipynb # convert notebook.ipynb to a .py file in the double percent format
jupytext --to py:percent --opt comment_magics=false notebook.ipynb # same as above + do not comment magic commands
See the bottom of the percent format section for more about that last command and further customization options.
|
76385046 | 76385123 | Python - How to make current script iterate through list of words instead of one string/word only?
I am very new to python, and have put together a script parsing different scripts i've looked at.
The goal is to return all possible variants of a list of keywords, replacing the characters by leet code (e.g.: 'L33T' or 'l337' instead of 'Leet')
I have been able to achieve this for one string/word only, but I wish to be able to input a list of keywords and obtain the same results.
This is my first time using Stack overflow, and I would really appreciate any help you can provide me :)
Here is my code:
import itertools
def leet(word):
leet_matches = [['a','@','4','∆','Д','а','а','a','à'],
['b','8','b','ḃ','ḅ','ḇ'],
['c','<','{','[','(','©'],
['d','d','ď','ḋ','ḍ','ḏ','ḑ','ḓ'],
['e','3','£','₤','€','е'],
['f','7','ƒ','ḟ'],
['g','9','[','-','6','ĝ','ğ','ġ','ģ','ǧ','ǵ','ḡ'],
['h','4','#','ĥ','ȟ','ḣ','ḥ','ḧ','ḩ','ḫ','ẖ'],
['i','1','|','!','ì','í'],
['j','√','ĵ','ǰ'],
['k','ķ','ǩ','ḱ','ḳ','ḵ','ķ','ǩ','ḱ','ḳ','ḵ'],
['l','1','|','ĺ','ļ','ľ','ḷ','ḹ','ḻ','ḽ'],
['m','м','ḿ','ṁ','ṃ'],
['n','И','и','п','ñ','ń','ņ','ň','ǹ','ṅ','ṇ','ṉ','ṋ'],
['o','0','Ø','Θ','о','ө','ò','ó','ô','õ','ö','ō','ŏ','ő','ơ','ǒ','ǫ','ǭ'],
['p','р','ṕ','ṗ'],
['q','9','(',')','0'],
['r','Я','®','ŕ','ŗ','ř','ȑ','ȓ','ṙ','ṛ','ṝ','ṟ'],
['s','5','$','§','ś','ŝ','ş','š','ș','ṡ','ṣ','ṥ','ṧ','ṩ'],
['t','7','+','т','ţ','ť','ț','ṫ','ṭ','ṯ','ṱ','ẗ'],
['u','ù','ú','û','ü','ũ','ū','ŭ','ů','ű','ų','ư','ǔ','ǖ','ǘ'],
['v'],
['w','Ш','ŵ','ẁ','ẃ','ẅ','ẇ','ẉ','ẘ'],
['x','×','%','*','Ж','ẋ','ẍ'],
['y','¥','Ч','ү','у','ṽ'],
['z','5','ź','ż','ž','ẑ']]
l = []
for letter in word:
for match in leet_matches:
if match[0] == letter:
l.append(match)
return list(itertools.product(*l))
word = "hola"
test_list = leet(word)
def remove(string):
return string.replace(" ", "")
res = [''.join(tups) for tups in test_list]
print (str(res)+remove(str(res)))
import csv
with open ('leet_latinalphabet.csv', mode ='w') as csvfile:
fieldnames = ['leet variants']
writer = csv.DictWriter(csvfile,fieldnames=fieldnames)
writer.writeheader()
writer.writerow({"leet variants":str(res)[1:-1].replace("'","")})
| How do I use itertools in Python to generate all possible variants of a list of keywords with leet code? | Loop over the list of words, calling leet() on each word.
words = ['hola', 'some', 'other', 'word']
with open ('leet_latinalphabet.csv', mode ='w') as csvfile:
fieldnames = ['word', 'leet variants']
writer = csv.DictWriter(csvfile,fieldnames=fieldnames)
writer.writeheader()
for word in words:
row = {"word": word, "leet variants": ",".join(leet(word))}
writer.writerow(row)
|
76382337 | 76383224 | I am using Google code scanner Android MLKit for Barcode scanning. I am using below dependencies. I want the use bundled model so that initialisation time is not taken when app is launched. Is there a way can I use bundled version of model :
Please find below dependencies I used for this :
implementation 'com.google.android.gms:play-services-code-scanner:16.0.0'
AndroidManifest:
When I used the above dependencies , I see below exception during downloading the model:
Waiting for the Barcode UI module to be downloaded.
Is there a way can I use bundled version of model so that I need not wait for Barcode UI module to be downloaded. Please help me regarding this
Thanks in Adavance.
| ml-kit - barcode-scanning android - Google code scanner | What about this:
dependencies {
// ...
// Use this dependency to bundle the model with your app
implementation 'com.google.mlkit:barcode-scanning:17.1.0'
}
Found at: https://developers.google.com/ml-kit/vision/barcode-scanning/android
|
76381596 | 76381793 | I have a REST controller with a token creation call. Inside the ObjectNode I get big JSON data. The database column is varchar2(4000) and I want to limit this ObjectNode size to 4000 by adding validation at the controller level. Not sure how to do this?
data class TokenRequest(
@NotEmpty(message = "id is mandatory")
open val id: String,
@NotEmpty(message = "gameId is mandatory")
open val game: String,
@NotEmpty(message = "gameType is mandatory")
open val type: String,
@NotEmpty(message = "gameDate is mandatory")
open val date: String,
@NotEmpty(message = "coupon is mandatory")
open val token: ObjectNode,
)
class TokenController {
fun createToken(@Valid @RequestBody request: TokenRequest): Token {
val now = Token.generateNowTimestamp()
val token = Token.fromTokenRequest(request, now, now, request.teamId)
return tokenService.create(token)
}
}
| Spring Rest Validation ObjectNode data size limit | It sounds like you're trying to cap the size of the JSON data contained in the 'token' field of your request. You want it to be no more than 4000 characters, right? There's actually a way to handle this in Kotlin by creating your own validation annotation. Here's how:
First, you need to create the annotation itself:
@Target(AnnotationTarget.FIELD)
@Retention(AnnotationRetention.RUNTIME)
@MustBeDocumented
@Constraint(validatedBy = [JsonNodeLengthValidator::class])
annotation class MaxJsonLength(
val message: String = "JSON Object is too big",
val groups: Array<KClass<*>> = [],
val payload: Array<KClass<out Payload>> = [],
val value: Int = 4000
)
Then, we'll make a custom validator for it:
import com.fasterxml.jackson.databind.node.ObjectNode
import javax.validation.ConstraintValidator
import javax.validation.ConstraintValidatorContext
class JsonNodeLengthValidator : ConstraintValidator<MaxJsonLength, ObjectNode> {
private var maxLength: Int = 0
override fun initialize(annotation: MaxJsonLength) {
this.maxLength = annotation.value
}
override fun isValid(node: ObjectNode?, context: ConstraintValidatorContext): Boolean {
        return (node?.toString()?.length ?: 0) <= maxLength
}
}
Finally, we'll use our shiny new validator annotation in your data class:
data class TokenRequest(
@NotEmpty(message = "id is mandatory")
open val id: String,
@NotEmpty(message = "gameId is mandatory")
open val game: String,
@NotEmpty(message = "gameType is mandatory")
open val type: String,
@NotEmpty(message = "gameDate is mandatory")
open val date: String,
@NotEmpty(message = "coupon is mandatory")
@MaxJsonLength(value = 4000, message = "Token JSON object is too big")
open val token: ObjectNode,
)
So there you have it! This makes sure that your TokenRequest validation will fail if the JSON string of token goes beyond 4000 characters. If it does, you'll get a validation error. Hope this helps!
|
76384241 | 76385126 | Have following YAML
image:
repository: "test.com/test"
pullPolicy: IfNotPresent
tag: "abc"
JAVA code to modify the YAKL file
public class SnakeYaml1 {
public static void main(String[] args) throws FileNotFoundException {
// TODO Auto-generated method stub
InputStream inputStream = new FileInputStream(new File("C:\\yaml\\student1.yaml"));
Yaml yaml = new Yaml(new Constructor(Values1.class));
Values1 data = yaml.load(inputStream);
Image image = new Image();
image.setPullPolicy("update");
data.setImage(image);
DumperOptions options = new DumperOptions();
options.setIndent(2);
options.setDefaultFlowStyle(DumperOptions.FlowStyle.FLOW);
options.setIndicatorIndent(2);
options.setIndentWithIndicator(true);
PrintWriter writer = new PrintWriter(new File("C:\\yaml\\student1.yaml"));
Yaml yaml1 = new Yaml(new Constructor(Values1.class));
yaml1.dump(data, writer);
}
}
public class Values1 {
private Image image;
public Image getImage() {
return image;
}
public void setImage(Image image) {
this.image = image;
}
}
public class Image {
private String repository;
private String pullPolicy;
private String tag;
public Image()
{
}
public Image (String repository, String pullPolicy, String tags)
{
super();
this.repository = repository;
this.pullPolicy = pullPolicy;
this.tag = tags;
}
public String getRepository() {
return repository;
}
public void setRepository(String repository) {
this.repository = repository;
}
public String getPullPolicy() {
return pullPolicy;
}
public void setPullPolicy(String pullPolicy) {
this.pullPolicy = pullPolicy;
}
public String getTag() {
return tag;
}
public void setTag(String tag) {
this.tag = tag;
}
}
After executing the Java code, the YAML format gets changed.
YAML format after executing JAVA code
!!oe.kubeapi.abc.Values1
image: {pullPolicy: update, repository: null, tag: null}
Expected YAML format after execution of java code
image:
repository: "test.com/test"
pullPolicy: update
tag: "abc"
I am not getting why the YAML format is changed after executing the Java code. Is this a bug in SnakeYaml?
I tried putting the image property in List format as well (List<Image> image), but it still did not work.
Please suggest what should be done. Any help please?
| Not able to format YAML using SnakeYaml keeping original way | Well, you mentioned it is the SnakeYaml lib, so I wonder whether you have ever looked through its documentation?
Your code works as it should.
try:
DumperOptions options = new DumperOptions();
options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
Yaml yaml = new Yaml(options);
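For reference, here is a minimal sketch of how those options could be applied to the program from the question (assuming the same SnakeYAML 1.x version and that the Values1/Image beans from the question are on the classpath). Two extra points, which are assumptions beyond the answer above: mutating the existing Image instead of replacing it keeps repository and tag, and dumpAs with Tag.MAP also drops the "!!...Values1" type tag from the output.
import java.io.FileInputStream;
import java.io.FileWriter;
import java.io.InputStream;
import java.io.Writer;
import org.yaml.snakeyaml.DumperOptions;
import org.yaml.snakeyaml.Yaml;
import org.yaml.snakeyaml.constructor.Constructor;
import org.yaml.snakeyaml.nodes.Tag;
public class SnakeYamlBlockDump {
    public static void main(String[] args) throws Exception {
        Values1 data;
        try (InputStream in = new FileInputStream("C:\\yaml\\student1.yaml")) {
            // load the file into the same Values1 bean as in the question
            data = new Yaml(new Constructor(Values1.class)).load(in);
        }
        // mutate the existing Image so repository and tag are kept instead of becoming null
        data.getImage().setPullPolicy("update");
        DumperOptions options = new DumperOptions();
        options.setDefaultFlowStyle(DumperOptions.FlowStyle.BLOCK);
        Yaml dumper = new Yaml(options);
        try (Writer out = new FileWriter("C:\\yaml\\student1.yaml")) {
            // dumpAs with Tag.MAP suppresses the "!!...Values1" root type tag in the output
            out.write(dumper.dumpAs(data, Tag.MAP, DumperOptions.FlowStyle.BLOCK));
        }
    }
}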
|
76378684 | 76383239 | I'm new to the Harbor registry. I was asked to propose an architecture for Harbor in my company. At first I proposed an architecture based on a proxy cache, but the CISO refused to use a proxy cache for the enterprise without saying why. I then proposed another architecture based on replication. We validate some base images that are pulled from public registries and pushed into our Harbor registry (one active Harbor that pulls the images from the internet and another passive Harbor for high availability + 4 other Harbors that live in special network zones and get the images from the master Harbor).
The question is: why did the CISO refuse the use of a proxy cache? Are there any drawbacks to using it? What are the security risks of using the Harbor proxy cache vs replication? I can't find clear information about this question on the internet. It seems that the majority is using a proxy cache.
Thank you!
| Harbor registry proxy cache vs replication | At this stage one can only speculate, given the unprofessional behaviour of not explaining the reasons, and also of not asking for them.
Regarding Harbor proxy and replication, the main difference between the two options is the threat surface and how much control you have over it.
Proxy
Passive, forwards requests upstream if not found locally.
No control over which images end up in your registry, so the threat surface is larger.
Replication
Active, you explicitly specify the images you want to copy from upstream.
Full control over what enters your registry.
|
76383101 | 76383250 | I have data that looks like this:
dataframe_1:
week SITE LAL SITE LAL
0 1 BARTON CHAPEL 1.1 PENASCAL I 1
1 2 BARTON CHAPEL 1.1 PENASCAL I 1
2 3 BARTON CHAPEL 1.1 PENASCAL I 1
And, I need the final dataframe to look like this:
dataframe_2:
week SITE LAL
0 1 BARTON CHAPEL 1.1
1 2 BARTON CHAPEL 1.1
2 3 BARTON CHAPEL 1.1
3 1 PENASCAL I 1
4 2 PENASCAL I 1
5 3 PENASCAL I 1
I've tried using 'melt' but I cannot get the desired result. Perhaps I'm using the wrong approach?
thank you,
| Reshaping a Dataframe with repeating column names | Not a very generalizable solution, but will work on your example:
df.groupby('week').apply( lambda _df : pd.concat((_df.iloc[:,1:3], _df.iloc[:,3:5]))).reset_index('week')
It groups by week and then reshapes with column selection + concatenation, removing a superfluous index column at the end.
|
76381742 | 76381799 | I created a popup that appears when I click a button, but to make it disappear I have to click again. Is there a way to set a timer and make it disappear?
Function:
// When the user clicks on div, open the popup
function myFunction() {
var popup = document.getElementById("myPopup");
popup.classList.toggle("show");
}
Style:
.popuptext {
display: none;
}
.popuptext.show {
display: block;
}
The HTML:
<div class="popup" onclick="myFunction()">Click me to toggle the popup!
<span class="popuptext" id="myPopup">A Simple Popup!</span>
</div>
I need the popup to close after 10 seconds OR when the user clicks somewhere else.
I edited the code to below and it does close after 10 seconds, how to achieve the second part (close when user clicks somewhere else):
function myFunction() {
var popup = document.getElementById("myPopup");
popup.classList.toggle("show");
if(popup.classList.contains("show"))
setTimeout(() => popup.classList.remove("show"), 10000)
}
| How to set a timeout for a popup and close if user clicks elsewhere? | To do this you need to:
Define a function, hide() that hides the popup.
Add a mousedown event listener to the whole document that invokes hide
Within hide, ensure that the click event's target is not contained in the popup.
Set up the timeout to call hide
Important: Have hide clear the created timeout and remove the listener that was added.
function myFunction() {
var popup = document.getElementById("myPopup");
popup.classList.add('show')
let timeout;
function hide(e) {
if (e && popup.contains(e.target)) return; // the timeout calls hide() without an event, so only bail out for clicks inside the popup
popup.classList.remove("show");
document.removeEventListener('mousedown', hide);
clearTimeout(timeout)
}
document.addEventListener('mousedown', hide)
timeout = setTimeout(hide, 10000)
}
.popuptext {
display: none;
}
.popuptext.show {
display: block;
}
<div class="popup" onclick="myFunction()">Click me to toggle the popup!
<span class="popuptext" id="myPopup">A Simple Popup!</span>
</div>
|
76382658 | 76383281 | I am making a three-d array. The problem I am facing is that I want to create multiple 3-d arrays, however with varying sizes of rows and columns, so the first matrix size could be 0-2-2 while the next matrix could be, say, 1-1-3, and so on.
Kindly do not suggest making one large matrix big enough to hold the maximum number of rows and columns.
I personally have tried using a structure to create the code. I have defined a 2-d array (for rows and columns) in the structure and then stored it in the variable e[1].array (2-d). I have used a for loop to continuously change the number of rows and columns of the array based on user input. The problem I am facing is that every time the for loop moves to the next value, the code overwrites itself, hence previous values of the array cannot be accessed. So if for the first matrix the size of rows and columns was 2-2 and the next is 1-3, e[1].x[2][2] holds some value, but for the second loop iteration e[1].x[1][3] re-defines the dimensions of x, hence I could not access x[2][2].
Kindly suggest ways I could store a 3-d array with a different number of rows and columns for each matrix.
int main()
{
int matrix;
printf("ENTER NUMBER OF MATRICES\n");
scanf("%d`", &matrix);
int row, column;
for (int m = 0; m < matrix; m++) {
printf("ENTER NUMBER OF ROWS IN %d MATRICE\n", m + 1);
scanf("%d", &row);
printf("ENTER NUMBER OF column IN %d MATICE\n", m + 1);
scanf("%d", &column);
struct x {
int f[row][column];
};
struct x k[m];
int g = 1;
printf("ENTER ALL ELEMENT LEFT TO RIGHT\n");
for (int j = 0; j < row; j++) {
for (int u = 0; u < column; u++) {
printf("%d)\n", g);
scanf("%d", &k[m].f[j][u]);
g = g + 1;
}
}
for (int s = 0; s < row; s++) {
printf("|");
for (int l = 0; l < column; l++) {
printf("%d", k[m].f[s][l]);
if (l < column - 1) {
printf(" ");
}
}
printf("|\n");
}
printf("%d", k[0].f[0][0]); // error here calling over written
value
}
return 0;
}
expectation:
ENTER NUMBER OF MATRICES
2
ENTER NUMBER OF ROWS IN 1 MATRICE
2
ENTER NUMBER OF column IN 1 MATICE
2
ENTER ALL ELEMENT LEFT TO RIGHT
1)
1
2)
2
3)
3 4
4)
4
|1 2|
|3 4|
ENTER NUMBER OF ROWS IN 2 MATRICE
1
ENTER NUMBER OF column IN 2 MATICE
3
ENTER ALL ELEMENT LEFT TO RIGHT
1)
5
2)
6
3)
7
|5 6 7|
1 // error here in output
| 3-d array with different size of row and column | Since each matrix may be a different size, you should manage each matrix separately and record its dimensions separately. The code below shows how to use a structure type to do that.
Common C implementations support variable length arrays, so you use this to make addressing the matrix elements simpler. The program below shows how to use a pointer to a variable length array to point to the rows of a matrix. The memory address is recorded in the structure using void *, because we cannot have a variably modified type in the structure, but a separate variable is declared where it is needed, and it is set to the start of memory for the matrix.
In the absence of using variable length arrays, you would have to address the elements using manual arithmetic calculations into a linear array.
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
// Read the number of matrices.
int NMatrices;
printf("Enter the number of matrices: ");
if (1 != scanf("%d", &NMatrices))
{
fprintf(stderr, "Error, scanf for number of matrices failed.\n");
exit(EXIT_FAILURE);
}
if (NMatrices < 0)
{
fprintf(stderr, "Error, number of matrices is negative.\n");
exit(EXIT_FAILURE);
}
// Define a type to manage a matrix.
struct MatrixInformation
{
void *Memory; // Memory for the matrix.
int NRows, NColumns; // Number of rows and number of columns.
};
// Allocate memory to manage NMatrices matrices.
struct MatrixInformation *Matrices = malloc(NMatrices * sizeof *Matrices);
if (!Matrices)
{
fprintf(stderr, "Error, failed to allocate memory.\n");
exit(EXIT_FAILURE);
}
// Read each matrix.
for (int m = 0; m < NMatrices; ++m)
{
// Read the number of rows and the number of columns of this matrix.
int NRows, NColumns;
printf("Enter the number of rows in matrix %d: ", m+1);
if (1 != scanf("%d", &NRows))
{
fprintf(stderr, "Error, scanf for number of rows failed.\n");
exit(EXIT_FAILURE);
}
if (NRows <= 0)
{
fprintf(stderr, "Error, number of rows is not positive.\n");
exit(EXIT_FAILURE);
}
printf("Enter the number of columns in matrix %d: ", m+1);
if (1 != scanf("%d", &NColumns))
{
fprintf(stderr, "Error, scanf for number of columns failed.\n");
exit(EXIT_FAILURE);
}
if (NColumns <= 0)
{
fprintf(stderr, "Error, number of columns is not positive.\n");
exit(EXIT_FAILURE);
}
// Create a temporary pointer for the matrix and allocate memory.
int (*Matrix)[NColumns] = malloc(NRows * sizeof *Matrix);
if (!Matrix)
{
fprintf(stderr, "Error, failed to allocate memory.\n");
exit(EXIT_FAILURE);
}
// Save the numbers of rows and columns and the memory address.
Matrices[m].NRows = NRows;
Matrices[m].NColumns = NColumns;
Matrices[m].Memory = Matrix;
// Get the values for the matrix elements.
for (int r = 0; r < NRows; ++r)
for (int c = 0; c < NColumns; ++c)
{
printf("Enter the element [%d, %d]: ", r+1, c+1);
if (1 != scanf("%d", &Matrix[r][c]))
{
fprintf(stderr, "Error, scanf for element failed.\n");
exit(EXIT_FAILURE);
}
}
}
// Print each matrix.
for (int m = 0; m < NMatrices; ++m)
{
printf("Matrix %d:\n", m+1);
// Get the numbers of rows and columns and the memory address.
int NRows = Matrices[m].NRows;
int NColumns = Matrices[m].NColumns;
int (*Matrix)[NColumns] = Matrices[m].Memory;
// Print each row.
for (int r = 0; r < NRows; ++r)
{
// Start each row with a delimiter and no spaces.
printf("|%d", Matrix[r][0]);
// Print each element with two spaces for separation.
for (int c = 1; c < NColumns; ++c)
printf(" %d", Matrix[r][c]);
// Finish each row with a delimiter and a new-line character.
printf("|\n");
}
}
// Free the memory of each matrix.
for (int m = 0; m < NMatrices; ++m)
free(Matrices[m].Memory);
// Free the memory for the array of structures about the matrices.
free(Matrices);
}
|
76385092 | 76385129 | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 400 entries, 0 to 399
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CompPrice 400 non-null int64
1 Income 400 non-null int64
2 Advertising 400 non-null int64
3 Population 400 non-null int64
4 Price 400 non-null int64
5 ShelveLoc 400 non-null object
6 Age 400 non-null int64
7 Education 400 non-null int64
8 Urban 400 non-null object
9 US 400 non-null object
10 HighSales 400 non-null object
dtypes: int64(7), object(4)
memory usage: 34.5+ KB
As shown in the info() result above, there are 11 columns indexed from 0 to 10 in my dataset, DF. Now, I would like to extract only the first 10 columns (that are the columns with the indices 0 to 9). However, when I try to use the code below:
DF.iloc[:, 0:9]
It returns only the first 9 columns (that is, from CompPrice to Urban).
In this case, I need to change my code to:
DF.iloc[:, 0:10]
to get what I actually want (that is, from CompPrice to US).
I'm really confused by .iloc indices. Why does it require '10' instead of '9' when it starts with index '0'? The starting and ending indices are not consistent.
| Index of .iloc API in Pandas | What you are observing is the standard slicing behaviour of pandas: the slice 0:10 is half-open, so the start position 0 is included and the stop position 10 is excluded, which gives you the 10 columns at positions 0 through 9. This is intended and logical, as Python lists slice the same way. As per the docs:
.iloc is primarily integer position based (from 0 to length-1 of the axis), but may also be used with a boolean array. .iloc will raise IndexError if a requested indexer is out-of-bounds, except slice indexers which allow out-of-bounds indexing. (this conforms with Python/NumPy slice semantics).
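A small sketch of the same half-open slicing behaviour, using a toy frame rather than the dataset from the question:
import pandas as pd
df = pd.DataFrame([[1, 2, 3, 4]], columns=["CompPrice", "Income", "Advertising", "Population"])
print(df.iloc[:, 0:3].columns.tolist())  # ['CompPrice', 'Income', 'Advertising'] -- stop position 3 is excluded
print(df.iloc[:, 0:4].columns.tolist())  # all four columns, positions 0 through 3
print(list(range(11))[0:10])             # plain Python slicing works the same way: 0..9, 10 excluded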
|
76380777 | 76381845 | We are in the middle of upgrading Ruby versions v2.7.3 -> v3.1.3
One of our test cases related to valid IPv6 address strings is failing; check the following:
# ruby 2.7.3
IPAddr.new('fe80::85e:7530:69ec:9074%en0').ipv6?
=> IPAddr::InvalidAddressError (invalid address: fe80::85e:7530:69ec:9074%en0)
# ruby 3.1.3
IPAddr.new('fe80::85e:7530:69ec:9074%en0').ipv6?
=> true
Is it really a bug or am I missing something? Please help..
| Ruby IPAddr class accepting wrong IPv6 address string |
Is it really a bug or am I missing something?
This used to be an issue in the ipaddr default gem up to version 1.2.2 which was fixed in version 1.2.3 in order to be fully compliant with RFC 4007 and RFC 6874. Version 1.2.3 of the ipaddr default gem was shipped as part of Ruby 3.1.0.
So, you are correct. This is a bug (although opinions differ on that) which was fixed in ipaddr 1.2.3 / Ruby 3.1.0.
|
76383210 | 76383298 | When using pandas.date_range with start date, frequency, and periods the date range rounds up when using the start date as the last day of a month.
It seems like a silent edge case bug. If it's not a bug, any idea why it does that?
For example
import pandas as pd
start_date = pd.Timestamp(2023, 5, 31)
date_range = pd.date_range(start=start_date, freq="MS", periods=6)
results in
DatetimeIndex(['2023-06-01', '2023-07-01', '2023-08-01', '2023-09-01',
'2023-10-01', '2023-11-01'],
dtype='datetime64[ns]', freq='MS')
From the documentation, I'd expect it to start in May and end in October:
DatetimeIndex(['2023-05-01', '2023-06-01', '2023-07-01', '2023-08-01', '2023-09-01',
'2023-10-01'],
dtype='datetime64[ns]', freq='MS')
I thought it had to do with the inclusive argument but that's not the reason either.
| Why does pandas `date_range` round up to the next month? | pd.date_range generates a range of dates between start and end. 2023-05-01 is earlier than the start date 2023-05-31, so the range will never include it. To do what you want, you can replace the day of the pd.Timestamp with 1.
start_date = pd.Timestamp(2023, 5, 31)
date_range = pd.date_range(start=start_date.replace(day=1), freq="MS", periods=6)
print(date_range)
DatetimeIndex(['2023-05-01', '2023-06-01', '2023-07-01', '2023-08-01',
'2023-09-01', '2023-10-01'],
dtype='datetime64[ns]', freq='MS')
|
76383232 | 76383300 | I am using the following code to persist UTMs across my website. However, I notice that it's adding a question mark to links even without the UTM parameters. Can someone help me figure out what in this code needs to change? It should only be trying to add UTM parameters to the links if there are any present in the URL.
<script>
(function() {
var domainsToDecorate = [
'example.com'
],
queryParams = [
'utm_medium', //add or remove query parameters you want to transfer
'utm_source',
'utm_campaign',
'utm_content',
'utm_term'
]
// do not edit anything below this line
var links = document.querySelectorAll('a');
// check if links contain domain from the domainsToDecorate array and then decorates
for (var linkIndex = 0; linkIndex < links.length; linkIndex++) {
for (var domainIndex = 0; domainIndex < domainsToDecorate.length; domainIndex++) {
if (links[linkIndex].href.indexOf(domainsToDecorate[domainIndex]) > -1 && links[linkIndex].href.indexOf("#") === -1) {
links[linkIndex].href = decorateUrl(links[linkIndex].href);
}
}
}
// decorates the URL with query params
function decorateUrl(urlToDecorate) {
urlToDecorate = (urlToDecorate.indexOf('?') === -1) ? urlToDecorate + '?' : urlToDecorate + '&';
var collectedQueryParams = [];
for (var queryIndex = 0; queryIndex < queryParams.length; queryIndex++) {
if (getQueryParam(queryParams[queryIndex])) {
collectedQueryParams.push(queryParams[queryIndex] + '=' + getQueryParam(queryParams[queryIndex]))
}
}
return urlToDecorate + collectedQueryParams.join('&');
}
// a function that retrieves the value of a query parameter
function getQueryParam(name) {
if (name = (new RegExp('[?&]' + encodeURIComponent(name) + '=([^&]*)')).exec(window.location.search))
return decodeURIComponent(name[1]);
}
})();
</script>
| What is the bug in my Persistent UTM Code? | In decorateUrl you are adding the ? if there is not one
urlToDecorate = (urlToDecorate.indexOf('?') === -1) ? urlToDecorate + '?' : urlToDecorate + '&';
I would suggest you may only want to do this if collectedQueryParams contains any elements
function decorateUrl(urlToDecorate) {
var collectedQueryParams = [];
for (var queryIndex = 0; queryIndex < queryParams.length; queryIndex++) {
if (getQueryParam(queryParams[queryIndex])) {
collectedQueryParams.push(queryParams[queryIndex] + '=' + getQueryParam(queryParams[queryIndex]))
}
}
if(collectedQueryParams.length == 0){
return urlToDecorate;
}
//only add the ? if we have params AND if there isn't already one
urlToDecorate = (urlToDecorate.indexOf('?') === -1) ? urlToDecorate + '?' : urlToDecorate + '&';
return urlToDecorate + collectedQueryParams.join('&');
}
|
76385033 | 76385142 | I have a Gujarati string, but it's in ISCII encoding, so Python is throwing an error (SyntaxError: invalid decimal literal).
string = TFH[TZDF\ I]GF.8[0 G[Xg;
line 1
string = TFH[TZDF\ I]GF.8[0 G[Xg;
^
SyntaxError: unexpected character after line continuation character
I tried byte encoding too, but it's not giving output like the ISCII encoding.
I am trying to convert ISCII into Unicode for the Gujarati language.
I also have an ISCII-based font and character map data.
ISCII input string: TFH[TZDF\ I]GF.8[0 G[Xg;
Desired unicode output: તાજેતરમાં યુનાઇટેડ નેશન્સ (Typed using gujarati phonetic keyboard)
| How can I convert ISCII encoding to unicode for Gujarati language in Python 3? | If you just want to write the string literal, for me, just writing print("તાજેતરમાં યુનાઇટેડ નેશન્સ") worked.
Or you could write:
characters = [2724, 2750, 2716, 2759, 2724, 2736, 2734, 2750, 2690, 32, 2735, 2753, 2728, 2750, 2695, 2719, 2759, 2721, 32, 2728, 2759, 2742, 2728, 2765, 2744]
string = str()
for c in characters:
string += chr(c)
Maybe have a look at this conversion script:
https://gist.github.com/pathumego/81672787807c23f19518c622d9e7ebb8
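Since the question mentions already having the font's character map data, another option is a generic, table-driven sketch like the one below; the mapping entries shown are placeholders (assumptions for illustration), not the real map for this font:
# Placeholder mapping from the legacy font's characters to Gujarati Unicode.
# Fill this in from your actual character map data.
FONT_TO_UNICODE = {
    "T": "\u0aa4",  # example entry only, not verified for this font
    "F": "\u0abe",  # example entry only, not verified for this font
}

def convert(text):
    # Replace each mapped character, leave unmapped characters untouched
    return "".join(FONT_TO_UNICODE.get(ch, ch) for ch in text)

print(convert("TFH[TZDF\\ I]GF.8[0 G[Xg;"))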
|
76381570 | 76381846 | I have a vcf file like this:
##bcftools_annotateVersion=1.3.1+htslib-1.3.1
##bcftools_annotateCommand=annotate
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT HG005
chr1 817186 rs3094315 G A 50 PASS platforms=2;platformnames=Illumina,CG;datasets=3;datasetnames=HiSeq250x250,CGnormal,HiSeqMatePair;callsets=5;callsetnames=HiSeq250x250Sentieon,CGnormal,HiSeq250x250freebayes,HiSeqMatePairSentieon,HiSeqMatePairfreebayes;datasetsmissingcall=IonExome,SolidSE75bp;callable=CS_HiSeq250x250Sentieon_callable,CS_CGnormal_callable,CS_HiSeq250x250freebayes_callable;AN=2;AF=1;AC=2 GT:PS:DP:ADALL:AD:GQ 1/1:.:809:0,363:78,428:237
chr1 817341 rs3131972 A G 50 PASS platforms=3;platformnames=Illumina,CG,Solid;datasets=4;datasetnames=HiSeq250x250,CGnormal,HiSeqMatePair,SolidSE75bp;callsets=6;callsetnames=HiSeq250x250Sentieon,CGnormal,HiSeq250x250freebayes,HiSeqMatePairSentieon,HiSeqMatePairfreebayes,SolidSE75GATKHC;datasetsmissingcall=IonExome;callable=CS_HiSeq250x250Sentieon_callable,CS_CGnormal_callable,CS_HiSeq250x250freebayes_callable;AN=2;AF=1;AC=2 GT:PS:DP:ADALL:AD:GQ 1/1:.:732:1,330:99,391:302
I need to extract ID column and AN from INFO column to get:
ID INFO
rs3094315 2
rs3131972 2
I'm trying something like this awk '/^[^#]/ { print $3, gsub(/^[^AN=])/,"",$8)}' file.vcf, but still not getting the desired result.
| Extracting vcf columns substring with awk | You can try this awk:
awk 'BEGIN{OFS="\t"}
/^##/{next}
/^#/{print $3,$8; next}
{
split($8,a,";")
for(i=1;i<=length(a);i++) if (a[i]~/^AN=/) {sub(/^AN=/,"",a[i]); break}
printf "%s%s%s\n", $3, OFS, a[i]
}
' file
With the example, prints:
ID INFO
rs3094315 2
rs3131972 2
|
76384930 | 76385186 | I'm using the following code to create a video player for detected reference images in AR session. Currently I display a placeholder video and after 1 second switch to real video that I want played. However, I would like to show the placeholder video until the real video is ready to be played.
I tried experimenting with AVAsset and observing the playable status based on this: Knowing when AVPlayer object is ready to play - however I didn't have any success.
func createVideoNode(_ target: ARReferenceImage) -> ModelEntity {
var videoPlane = ModelEntity()
var targetName: String = ""
if let name = target.name,
let validURL = URL(string: "https://testdomain.com/\(name).mp4") {
targetName = name
// Use the preloaded placeholder asset to create an AVPlayer
if let placeholderAsset = parent.placeholderAsset {
let placeholderPlayer = AVPlayer(playerItem: AVPlayerItem(asset: placeholderAsset))
let videoMaterial = VideoMaterial(avPlayer: placeholderPlayer)
videoPlane = ModelEntity(mesh: .generatePlane(width: Float(target.physicalSize.width), depth: Float(target.physicalSize.height)), materials: [videoMaterial])
placeholderPlayer.play()
DispatchQueue.global(qos: .background).async {
let videoPlayer = AVPlayer(url: validURL)
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: videoPlayer.currentItem, queue: .main) { [weak videoPlayer] _ in
videoPlayer?.seek(to: CMTime.zero)
videoPlayer?.play()
}
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
let videoMaterial = VideoMaterial(avPlayer: videoPlayer)
videoPlane.model?.materials = [videoMaterial]
videoPlayer.play()
self.parent.videoPlayers[targetName] = videoPlayer
print (target.name as Any)
}
}
} else {
fatalError("Failed to load placeholder video asset.")
}
}
return videoPlane
}
| Load AVAsset video in the background and replace playing placeholder video once it's playable in Swift and RealityKit | The key to resolving this issue is making sure the AVPlayer's item is actually ready to play before switching the video. You can use the Key-Value Observing (KVO) on the AVPlayerItem's status property to get notified when it's ready to play.
Here is the updated createVideoNode(_:) function:
func createVideoNode(_ target: ARReferenceImage) -> ModelEntity {
var videoPlane = ModelEntity()
var targetName: String = ""
if let name = target.name,
let validURL = URL(string: "https://testdomain.com/\(name).mp4") {
targetName = name
// Use the preloaded placeholder asset to create an AVPlayer
if let placeholderAsset = parent.placeholderAsset {
let placeholderPlayer = AVPlayer(playerItem: AVPlayerItem(asset: placeholderAsset))
let videoMaterial = VideoMaterial(avPlayer: placeholderPlayer)
videoPlane = ModelEntity(mesh: .generatePlane(width: Float(target.physicalSize.width), depth: Float(target.physicalSize.height)), materials: [videoMaterial])
placeholderPlayer.play()
DispatchQueue.global(qos: .background).async {
let asset = AVAsset(url: validURL)
let playerItem = AVPlayerItem(asset: asset)
let videoPlayer = AVPlayer(playerItem: playerItem)
// Observe the status of playerItem.
playerItem.addObserver(self, forKeyPath: "status", options: .new, context: nil)
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: videoPlayer.currentItem, queue: .main) { [weak videoPlayer] _ in
videoPlayer?.seek(to: CMTime.zero)
videoPlayer?.play()
}
self.parent.videoPlayers[targetName] = videoPlayer
}
} else {
fatalError("Failed to load placeholder video asset.")
}
}
return videoPlane
}
// Add this method to handle observed value change
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "status" {
if let playerItem = object as? AVPlayerItem, playerItem.status == .readyToPlay {
DispatchQueue.main.async { [weak self] in
if let videoPlane = self?.videoPlane {
let videoMaterial = VideoMaterial(avPlayer: playerItem.player)
videoPlane.model?.materials = [videoMaterial]
playerItem.player?.play()
}
}
}
}
}
This version of the function now creates an AVPlayerItem using the AVAsset. It then adds the ViewController as an observer of the playerItem's status property. The observeValue(forKeyPath:of:change:context:) method gets called when the status changes. When the status is .readyToPlay, it switches the video on the main queue.
Please note that the observeValue method is a standard method for classes that inherit from NSObject, make sure your class does that. Also remember to remove the observer when it's no longer needed.
You will also have to hold a strong reference to your AVPlayerItem and AVPlayer in order to observe changes. This might necessitate some architectural changes (adding properties to your class).
This solution should give you a general direction, but you might need to adjust it to fit your specific project setup and requirements.
|
76383308 | 76383309 | OpenAI provides an API which allows you to implement AI services such as ChatGPT or DALL-E.
For Ruby on Rails applications there are a couple of gems available, one of them being ruby-openai.
It works very well, but the only problem is that it doesn't come with a streaming conversation feature, meaning that you can only send one question request at a time without any history tracking of the conversation. In other words, the API forgets every question you asked after having sent the reply.
So how can we fix this?
| ruby-openai api gem in Ruby on Rails: how to implement a streaming conversation? | Basically you need to implement the whole behaviour yourself. Here are all the implementation steps, including the implementation of the DALL-E AI with a response containing several pictures rather than just one.
You can also find my whole repository HERE and clone the app!!!
IMPLEMENTING A STREAM CONVERSATION FEATURE
Basic implementation
Check out Doug Berkley's Notion Page for basic implementation of the API
Implement a streaming conversation
By default the openai gem does not come with that feature, hence having to implement it yourself
Create your database with 3 tables (conversations, questions, answers) with the following structure:
# schema.rb
ActiveRecord::Schema[7.0].define(version: 2023_05_29_194913) do
create_table "answers", force: :cascade do |t|
t.text "content"
t.integer "question_id", null: false
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["question_id"], name: "index_answers_on_question_id"
end
create_table "conversations", force: :cascade do |t|
t.text "initial_question"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.text "historic"
end
create_table "questions", force: :cascade do |t|
t.text "content"
t.integer "conversation_id", null: false
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["conversation_id"], name: "index_questions_on_conversation_id"
end
add_foreign_key "answers", "questions"
add_foreign_key "questions", "conversations"
end
Routes
Rails.application.routes.draw do
root "pages#home" # supposes that you have a pages controller with a home action
resources :conversations, only: [:create, :show]
post "question", to: "conversations#ask_question"
end
Home page view (with just a button that redirects to the create conversation action -- see below)
<h1>Let's talk</h1>
<%= button_to "Create New Conversation", conversations_path, method: :post, class: "btn btn-primary my-3" %>
Controller app/controllers/conversations_controller.rb
class ConversationsController < ApplicationController
def create
@convo = Conversation.create
redirect_to conversation_path(@convo)
end
def show
@convo = Conversation.find(params[:id])
end
def ask_question
@question = Question.new(content: params[:entry])
conversation = Conversation.find(params[:conversation])
@question.conversation = conversation
@question.save
if conversation.historic.nil?
response = OpenaiService.new(params[:entry]).call
conversation.historic = "#{@question.content}\n#{response}"
else
response = OpenaiService.new("#{conversation.historic}\n#{params[:entry]}").call
conversation.historic += "\n#{@question.content}\n#{response}"
end
conversation.save
@answer = Answer.create(content: response, question: @question)
redirect_to conversation_path(conversation)
end
end
Show page app/views/conversations/show.html.erb
<h1>This is your conversation</h1>
<p>Ask your question</p>
<form action="<%= question_path %>", method="post">
<input type="hidden" name="conversation" value="<%= @convo.id %>">
<textarea rows="5" cols="33" name="entry"></textarea>
<input type="submit" class="btn btn-primary">
</form>
<br>
<ul>
<% @convo.questions.each do |question| %>
<li>
Q: <%= question.content.capitalize %> <%= "?" if question.content.strip.last != "?" %>
</li>
<li>
A: <%= question.answers.first.content %>
</li>
<% end %>
</ul>
<%= link_to "Back", root_path %>
rails s and test :)
Resources:
https://github.com/OGsoundFX/ruby-open-ai
https://doug-berkley.notion.site/doug-berkley/Rails-ChatGPT-Service-Object-Setup-21748fc969514b978bf6345f897b6d3e
https://github.com/alexrudall/ruby-openai
Going Further:
https://gist.github.com/alexrudall/cb5ee1e109353ef358adb4e66631799d
|
76381726 | 76381855 | I want to write integration tests with shared context (shared state) for all testcases.
From docs:
When using a class fixture, xUnit.net will ensure that the fixture instance will be created before any of the tests have run, and once all the tests have finished, it will clean up the fixture object by calling Dispose, if present.
It follows from docs that I need to use IClassFixture. Ok then.
I create a sample ASP.NET Core Web API with controllers and in Program.cs add only the following line:
public partial class Program { }
Nothing else is changed in the project.
Then I add an xUnit test project where I add a reference to my Web API project and modify the default UnitTest1 class with the following code:
public class UnitTest1 : IClassFixture<WebApplicationFactory<Program>>
{
private readonly HttpClient _client;
private string? _val;
public UnitTest1(WebApplicationFactory<Program> factory)
{
_client = factory.CreateClient();
}
[Fact]
public void Test1()
{
Assert.Null(_val);
_val = "smth";
}
[Fact]
public void Test2()
{
Assert.NotNull(_val);
}
}
So basically I want to set "shared context" (which is a string variable in this case) in Test1 and use it in Test2. I run testcases and I see that Test1 passes and Test2 fails.
I have seen xUnit IClassFixture constructor being called multiple times and tried using test explorer window or even switch to Rider but that did not help. Did someone encounter such a behavior?
| xUnit IClassFixture reinitialized for every testcase | This is working correctly, but you have implemented it wrong. xUnit runtime will create a new instance of UnitTest1 for every test execution, but it should only create a single instance of WebApplicationFactory<Program> for the lifetime of the current test batch execution context for this test class.
Your _val variable is not defined as part of the test fixture at all, so it makes sense that the value is not persisted across the different tests.
Because you are passing the factory, and not the instance, you will experience multiple calls to factory.CreateClient(); and this is expected. In this scenario you wouldn't normally use a factory as the test fixture, but your test fixture could use the factory method internally:
/// <summary>Fixture to share across many tests in the same context</summary>
public class MyTestFixture : IDisposable
{
public HttpClient Client { get; private set; }
public string? Value { get; set; }
public MyTestFixture(WebApplicationFactory<Program> factory)
{
Client = factory.CreateClient();
}
public void Dispose()
{
// clean up any unmanaged references
}
}
* if you are not using DI for your factory, then you should instantiate the factory directly in the constructor instead of expecting it as an argument.
public class UnitTest1 : IClassFixture<MyTestFixture>
{
private readonly MyTestFixture _sharedContext;
public UnitTest1(MyTestFixture testFixture)
{
_sharedContext = testFixture;
}
[Fact]
public void Test1()
{
Assert.Null(_sharedContext.Value);
_sharedContext.Value = "smth";
}
[Fact]
public void Test2()
{
Assert.NotNull(_sharedContext.Value);
}
}
|
76381615 | 76381868 | I am trying to build a date range bar graph using ggplot2 (R) in the spirit of:
I have followed a thread but I am completely unable to reproduce the results with dates.
If I understood it correctly, for each "id", the bar length is determined by the smallest and largest "value" in the database.
Here is a minimally working example of my data:
# Example dataframe
DF <- data.frame(Name = as.factor(c("1.Project1", "2.Project2", "3.Project3", "4.Project4")),
CreationTime = as.POSIXct(c("2019-12-10 13:22:20", "2019-12-17 12:25:48", "2020-01-02 13:02:57", "2020-01-14 08:37:10")),
LastActivity = as.POSIXct(c("2019-12-17 10:42:17 ", "2020-01-02 13:27:10", "2021-02-11 11:32:45", "2023-05-03 07:41:38")),
Status = as.factor(c("Prod", "Prod", "Dev", "Complete")))
# From wide to long
DFGather <- DF %>% tidyr::gather(key="Time", value="Value", 2:3)
# Generate plot
ggplot2::ggplot(DFGather, aes(x = Value, y = Name, fill = Status)) +
ggplot2::geom_col() +
ggplot2::coord_cartesian(xlim = c(min(DFGather$Value),max(DFGather$Value))) +
ggplot2::scale_x_datetime(date_breaks = "3 months", labels = scales::label_date_short())
I have also tried converting POSIXct dates to integers but it didn't change my output:
DFGather$Value <- as.integer(format(DFGather$Value,"%Y%m%d"))
Thanks for the support,
C.
| ggplot2: Date range bar graph | A quick and dirty approach using geom_segment.
ggplot2::ggplot(DF, ggplot2::aes(x = CreationTime, xend = LastActivity, y = Name, yend = Name, colour = Status)) +
ggplot2::geom_segment(linewidth = 15) +
ggplot2::coord_cartesian(xlim = c(min(DFGather$Value),max(DFGather$Value))) +
ggplot2::scale_x_datetime(date_breaks = "3 months", labels = scales::label_date_short())
Created on 2023-06-01 with reprex v2.0.2
|
76382858 | 76383322 | I am trying to cast multiple date formats from a string type field using PySpark. When I use the below date formats it works fine.
def custom_to_date(col):
formats = ("MM/dd/yyyy", "yyyy-MM-dd", "dd/MM/yyyy", "MM/yy","dd/M/yyyy")
return coalesce(*[to_date(col, f) for f in formats])
from pyspark.sql.functions import coalesce, to_date
df = spark.createDataFrame([(1, "01/22/2010"), (2, "2018-12-01")], ("id", "dt"))
df.withColumn("pdt", custom_to_date("dt")).show()
Above code gives the correct output.
But when I use the month as a single digit, as below, the code fails.
df = spark.createDataFrame([(1, "01/22/2010"), (2, "2018-12-1"),(3,"24/7/2006")], ("id", "dt"))
I got the below error message.
org.apache.spark.SparkException:
Job aborted due to stage failure:
Task 2 in stage 2.0 failed 4 times, most recent failure:
Lost task 2.3 in stage 2.0 (TID 10) (10.13.82.55 executor 0):
org.apache.spark.SparkUpgradeException:
[INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER]
You may get a different result due to the upgrading to Spark >= 3.0:
| Spark is unable to handle a particular date format | Adding an answer since the comments and the other answers don't cover the behaviour. The solution is not to add new formats, since the formats themselves can be better defined.
With Spark 3.0, M supports 01, 1, January, and Jan.
So you don't need MM.
spark reference - https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html
def custom_to_date(col):
formats = ("M/d/yyyy", "yyyy-M-d", "d/M/y", "M/y")
return coalesce(*[to_date(col, f) for f in formats])
from pyspark.sql.functions import coalesce, to_date
df = spark.createDataFrame([(1, "01/22/2010"), (2, "2018-12-1"),(3,"12/2023")], ("id", "dt"))
df.withColumn("pdt", custom_to_date("dt")).show()
Results -
Alternatively, if you want legacy behavior then you can use
spark.conf.set("spark.sql.legacy.timeParserPolicy","LEGACY")
or
spark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")
|
76384850 | 76385187 | My line renderer is drawing behind objects. I want it to draw on top of other game objects except for the ball.
How can I do this?
See the following image to reference the problem (the line renderer draws below the goal, and I want it to draw itself on top).
I searched for the issue but haven't found a single answer for 3D.
| How to draw LineRenderer above other objects? | To render a material "above" some other materials, you must set your LineRenderer or TrailRenderer's material Rendering mode to Transparent.
Also, set the Rendering Mode of the materials of the objects you wish to draw the LineRenderer on top of to Transparent.
Now go back to the LineRenderer's material and in Advanced Options set its Render Queue to 3999. (higher than the object's materials)
Now your LineRenderer will be drawn on top.
|
76381796 | 76381874 | I got some problems with duplicate rows which I don't wanna get.
Hi!
I got two tables - tab1, tab2 and I want to join tab2 to tab1 like:
SELECT t1.column_A1, t2.column_B2
FROM tab1 t1
JOIN
tab2 t2
ON t1.column_A1=t2.column_A2
tab1
| Column A1 | Column B1 | Column C1 |
| -------- | -------- | -------- |
| Z1 | Cell 2 | Cell 3 |
| Z2 | Cell 5 | Cell 6 |
tab2
| Column A2 | Column B2 | Column C2 |
| -------- | -------- | -------- |
| Z1 | PW | Cell 3 |
| Z1 | RW | Cell 6 |
For some rows in tab1 there is more than one matching row in tab2.
The result will be:
| Column A2 | Column B2 | Column C2 |
| -------- | -------- | -------- |
| Z1 | PW | RE |
| Z1 | RW | KS |
I want to get:
if PW - show only one row with PW;
if not PW - show only one row with RW
The result should be:
| Column A2 | Column B2 | Column C2 |
| -------- | -------- | -------- |
| Z1 | PW | RE |
| How to get not duplicate rows in join? | One option is to "sort" rows per each column_a1 by value stored in column_b2 and return rows that rank as the highest.
Sample data:
SQL> WITH
2 tab1 (column_a1, column_b1, column_c1)
3 AS
4 (SELECT 'Z1', 'cell 2', 'cell 3' FROM DUAL
5 UNION ALL
6 SELECT 'Z2', 'cell 5', 'cell 6' FROM DUAL),
7 tab2 (column_a2, column_b2, column_c2)
8 AS
9 (SELECT 'Z1', 'PW', 'cell 3' FROM DUAL
10 UNION ALL
11 SELECT 'Z1', 'RW', 'cell 6' FROM DUAL
12 UNION ALL
13 SELECT 'Z2', 'RW', 'cell 8' FROM DUAL),
Query begins here:
14 temp
15 AS
16 (SELECT t1.column_A1,
17 t2.column_B2,
18 ROW_NUMBER () OVER (PARTITION BY t1.column_a1 ORDER BY t2.column_b2) rn
19 FROM tab1 t1 JOIN tab2 t2 ON t1.column_A1 = t2.column_A2)
20 SELECT column_a1, column_b2
21 FROM temp
22 WHERE rn = 1;
COLUMN_A1 COLUMN_B2
------------ ------------
Z1 PW
Z2 RW
SQL>
|
76378880 | 76378926 | Trying to modify my axios wrapper and can't figure out why I'm getting this typescript error...
type Headers = {
Accept: string;
'Content-Type': string;
Authorization?: string;
}
export interface AxiosOptions {
params?: any;
data?: any;
headers: Headers;
}
const axiosOptions: AxiosOptions = {
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
},
};
export const updateAxiosOptionsHeaders = (
key: Pick<Headers, "Accept" | "Content-Type" | "Authorization">,
value: string
): void => {
axiosOptions.headers[key] = value;
}
| Pick error "Type 'Pick' cannot be used as an index type." | This doesn't really meet the usage pattern of Pick. You just need keyof Headers for your case:
type Headers = {
Accept: string;
'Content-Type': string;
Authorization?: string;
}
export interface AxiosOptions {
params?: any;
data?: any;
headers: Headers;
}
const axiosOptions: AxiosOptions = {
headers: {
Accept: 'application/json',
'Content-Type': 'application/json',
},
};
export const updateAxiosOptionsHeaders = (
key: keyof Headers,
value: string
): void => {
axiosOptions.headers[key] = value;
}
Playground link
Should you also have different value types in your object (which seems unlikely for a headers object), you can use a generic constraint:
export const updateAxiosOptionsHeaders = <K extends keyof Headers>(
key: K,
value: Headers[K]
): void => {
axiosOptions.headers[key] = value;
}
|
76378929 | 76378999 | What happens for an initial count of zero for an x86 rep prefix?
Intel's manual says explicitly it’s a while count != 0 loop with the test at the top, which is the sane expected behaviour.
But most of the many vague reports I've seen elsewhere suggest that there's no initial test for zero, so it would be like a countdown with a test at the end, and so a disaster if it's repeat { … count -= 1; } until count == 0; or who knows.
| x86 rep prefix with a count of zero: what happens? | Nothing happens with RCX=0; rep prefixes do check for zero first like the pseudocode says. (Unlike the loop instruction which is exactly like the bottom of a do{}while(--ecx), or a dec rcx/jnz but without affecting FLAGS.)
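A rough C model of the two behaviours described here (a sketch of the architectural semantics, not of the actual microcode):
#include <stddef.h>

/* rep-prefixed string op (e.g. rep stosb): the count is tested before each
   element, so rcx == 0 touches no memory and modifies no flags. */
static void rep_stosb_model(unsigned char *dst, unsigned char value, size_t rcx)
{
    while (rcx != 0) {
        *dst++ = value;
        rcx--;
    }
}

/* the loop instruction at the bottom of a loop body: decrement, then test,
   so the body always runs at least once when entered. */
static void loop_model(void (*body)(void), size_t rcx)
{
    do {
        body();
        rcx--;
    } while (rcx != 0);
}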
I think I've heard of this rarely being used as an idiom for a conditional load or store with rep lodsw or rep stosw with a count of 0 or 1, especially in the bad old days before cmov. (cmov is an unconditional load feeding an ALU select operation, so it needs a valid address, unlike rep lods with a count of zero.) This is not efficient especially for rep stos on modern x86 with Fast Strings microcode (P6 and later), especially without anything like Fast Short Rep-Movs (Ice Lake IIRC.)
The same applies for instructions that treat the prefixes as repz / repnz (cmps/scas) instead of unconditional rep (lods/stos/movs). Doing zero iterations means they leave FLAGS unmodified.
If you want to check FLAGS after a repe/ne cmps/scas, you need to make sure the count was non-zero, or that FLAGS was already set such that you'll branch in a useful way for zero-length buffers. (Perhaps from xor-zeroing a register that you're going to want later.)
rep movs and rep stos have fast-strings microcode on CPUs since P6, but the startup overhead makes them rarely worth it, especially when sizes can be short and/or data might be misaligned. They're more useful in kernel code where you can't freely use XMM registers. Some recent CPUs like Ice Lake have fast-short-rep microcode that I think is supposed to reduce startup overhead for small counts.
repe/ne scas/cmps do not have fast-strings microcode on most CPUs, only on very recent CPUs like Sapphire Rapids and maybe Alder Lake P-cores. So they're quite slow, like one load per clock cycle (so 2 cycles per count for cmpsb/w/d/q) according to testing by https://agner.org/optimize/ and https://uops.info/.
What setup does REP do?
Why is this code using strlen heavily 6.5x slower with GCC optimizations enabled? - GCC -O1 used to use repne scasb to inline strlen. This is a disaster for long strings.
Which processors support "Fast Short REP CMPSB and SCASB" (very recent feature)
Enhanced REP MOVSB for memcpy - even without ERMSB, rep movs will use no-RFO stores for large sizes, similar to NT stores but not bypassing the cache. Good general Q&A about memory bandwidth considerations.
|
76383242 | 76383335 | Trying to match a dictionary item with a string value from another column.
sample data:
df = A B
0 'a' {'a': '2', 'b': '5'}
1 'c' {'a': '2', 'b': '16', 'c': '32'}
2 'a' {'a': '6', 'd': '23'}
3 'd' {'b': '4', 'd': '76'}
I'm trying to get the following out:
Df = A B
0 'a' {'a': '2'}
1 'c' {'c': '32'}
2 'a' {'a': '6'}
3 'd' {'d': '76'}
I got this far not inside a dataframe:
d = {k: v for k, v in my_dict.items() if k == 'a'}
for a single line, but I couldn't get this to work and, to be fair, I didn't expect it to work directly, but was hoping I was close:
Test_df['B'] = {k: v for k, v in test_df['B'].items() if k == test_df['A']}
I get the following error:
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
What do I need to do to get this to work, or is there a better more efficient way?
| Dictionary Comprehension within pandas dataframe column | You can use a list comprehension with zip:
df['B'] = [{x: d[x]} for x, d in zip(df['A'], df['B'])]
Output:
A B
0 a {'a': '2'}
1 c {'c': '32'}
2 a {'a': '6'}
3 d {'d': '76'}
|
76384672 | 76385193 | I have a yaml file which is similar to the following (FYI: ssm_secrets can be an empty array):
rabbitmq:
repo_name: bitnami
namespace: rabbitmq
target_revision: 11.1.1
path: rabbitmq
values_file: charts/rabbitmq/values.yaml
ssm_secrets: []
app_name_1:
repo_name: repo_name_1
namespace: namespace_1
target_revision: target_revision_1
path: charts/path
values_file: values.yaml
ssm_secrets:
- name: name-dev-1
key: .env
ssm_path: ssm_path/dev
name-backend:
repo_name: repo_name_2
namespace: namespace_2
target_revision: target_revision_2
path: charts/name-backend
values_file: values.yaml
ssm_secrets:
- name: name-backend-app-dev
ssm_path: name-backend/app/dev
key: app.ini
- name: name-backend-abi-dev
ssm_path: name-backend/abi/dev
key: contractTokenABI.json
- name: name-backend-widget-dev
ssm_path: name-backend/widget/dev
key: name.ini
- name: name-abi-dev
ssm_path: name-abi/dev
key: name_1.json
- name: name-website-dev
ssm_path: name/website/dev
key: website.ini
- name: name-name-dev
ssm_path: name/name/dev
key: contract.ini
- name: name-key-dev
ssm_path: name-key/dev
key: name.pub
And using External Secrets and EKS Blueprints, I am trying to generate the yaml file necessary to create the secrets
resource "kubectl_manifest" "secret" {
for_each = toset(flatten([for service in var.secrets : service.ssm_secrets[*].ssm_path]))
yaml_body = <<YAML
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: ${replace(each.value, "/", "-")}
namespace: ${split("/", each.value)[0]}
spec:
refreshInterval: 30m
secretStoreRef:
name: ${local.cluster_secretstore_name}
kind: ClusterSecretStore
data:
- secretKey: .env
remoteRef:
key: ${each.value}
YAML
depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
}
The above works fine, but I also need to use the key value from the yaml into secretKey: <key_value from yaml>.
If I try with for_each = toset(flatten([for service in var.secrets : service.ssm_secrets[*]]))
resource "kubectl_manifest" "secret" {
for_each = toset(flatten([for service in var.secrets : service.ssm_secrets[*]]))
yaml_body = <<YAML
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: ${replace(each.value["ssm_path"], "/", "-")}
namespace: ${split("/", each.value["ssm_path"])[0]}
spec:
refreshInterval: 30m
secretStoreRef:
name: ${local.cluster_secretstore_name}
kind: ClusterSecretStore
data:
- secretKey: .env
remoteRef:
key: ${each.value["ssm_path"]}
YAML
depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
}
It just gives me the following error:
The given "for_each" argument value is unsuitable: "for_each" supports
maps and sets of strings, but you have provided a set containing type
object.
I have tried converting the variable into a map, used lookup, but it doesn't work.
Any help would be much appreciated.
Update 1:
As per @MattSchuchard suggestion, changing the for_each into
for_each = toset(flatten([for service in var.secrets : service.ssm_secrets]))
Gave the following error:
Error: Invalid for_each set argument
│
│ on ../../modules/02-plugins/external-secrets.tf line 58, in resource "kubectl_manifest" "secret":
│ 58: for_each = toset(flatten([for service in var.secrets : service.ssm_secrets]))
│ ├────────────────
│ │ var.secrets is object with 14 attributes
│
│ The given "for_each" argument value is unsuitable: "for_each" supports maps and sets of strings, but you have provided a set containing type object.
Update 2:
@mariux gave the perfect solution, but here is what I came up with. It's not as clean, but it definitely works (PS: I myself am going to use Mariux's solution):
locals {
my_list = tolist(flatten([for service in var.secrets : service.ssm_secrets[*]]))
}
resource "kubectl_manifest" "secret" {
count = length(local.my_list)
yaml_body = <<YAML
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: ${replace(local.my_list[count.index]["ssm_path"], "/", "-")}
namespace: ${split("/", local.my_list[count.index]["ssm_path"])[0]}
spec:
refreshInterval: 30m
secretStoreRef:
name: ${local.cluster_secretstore_name}
kind: ClusterSecretStore
data:
- secretKey: ${local.my_list[count.index]["key"]}
remoteRef:
key: ${local.my_list[count.index]["ssm_path"]}
YAML
depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
}
| Terraform for_each over yaml file contents which is an object | Assumptions
Based on what you shared, I make the following assumptions:
the service is not actually important for you as you want to create external secrets by ssm_secrets.*.name using the given key and ssm_path attributes.
each name is globally unique for all services and never reused.
terraform hacks
Based on the assumptions you can create an array of ALL ssm_secrets using
locals {
ssm_secrets_all = flatten(values(var.secrets)[*].ssm_secrets)
}
and convert it to a map that can be used in for_each by keying the values by .name:
locals {
ssm_secrets_map = { for v in local.ssm_secrets_all : v.name => v }
}
Full (working) example
The example below works for me and makes some assumptions about where the variables should be used.
Using yamldecode to decode your original input into local.input
Using yamlencode to make reading the manifest easier and removing some string interpolations. This also ensures that the indent is correct as we convert HCL to yaml.
A terraform init && terraform plan will plan to create the following resources:
kubectl_manifest.secret["name-abi-dev"] will be created
kubectl_manifest.secret["name-backend-abi-dev"] will be created
kubectl_manifest.secret["name-backend-app-dev"] will be created
kubectl_manifest.secret["name-backend-widget-dev"] will be created
kubectl_manifest.secret["name-dev-1"] will be created
kubectl_manifest.secret["name-key-dev"] will be created
kubectl_manifest.secret["name-name-dev"] will be created
kubectl_manifest.secret["name-website-dev"] will be created
locals {
# input = var.secrets
ssm_secrets_all = flatten(values(local.input)[*].ssm_secrets)
ssm_secrets_map = { for v in local.ssm_secrets_all : v.name => v }
cluster_secretstore_name = "not provided secretstore name"
}
resource "kubectl_manifest" "secret" {
for_each = local.ssm_secrets_map
yaml_body = yamlencode({
apiVersion = "external-secrets.io/v1beta1"
kind = "ExternalSecret"
metadata = {
name = replace(each.value.ssm_path, "/", "-")
namespace = split("/", each.value.ssm_path)[0]
}
spec = {
refreshInterval = "30m"
secretStoreRef = {
name = local.cluster_secretstore_name
kind = "ClusterSecretStore"
}
data = [
{
secretKey = ".env"
remoteRef = {
key = each.value.key
}
}
]
}
})
# not included dependencies
# depends_on = [kubectl_manifest.cluster_secretstore, kubernetes_namespace_v1.namespaces]
}
locals {
input = yamldecode(<<-EOF
rabbitmq:
repo_name: bitnami
namespace: rabbitmq
target_revision: 11.1.1
path: rabbitmq
values_file: charts/rabbitmq/values.yaml
ssm_secrets: []
app_name_1:
repo_name: repo_name_1
namespace: namespace_1
target_revision: target_revision_1
path: charts/path
values_file: values.yaml
ssm_secrets:
- name: name-dev-1
key: .env
ssm_path: ssm_path/dev
name-backend:
repo_name: repo_name_2
namespace: namespace_2
target_revision: target_revision_2
path: charts/name-backend
values_file: values.yaml
ssm_secrets:
- name: name-backend-app-dev
ssm_path: name-backend/app/dev
key: app.ini
- name: name-backend-abi-dev
ssm_path: name-backend/abi/dev
key: contractTokenABI.json
- name: name-backend-widget-dev
ssm_path: name-backend/widget/dev
key: name.ini
- name: name-abi-dev
ssm_path: name-abi/dev
key: name_1.json
- name: name-website-dev
ssm_path: name/website/dev
key: website.ini
- name: name-name-dev
ssm_path: name/name/dev
key: contract.ini
- name: name-key-dev
ssm_path: name-key/dev
key: name.pub
EOF
)
}
terraform {
required_version = "~> 1.0"
required_providers {
kubectl = {
source = "gavinbunney/kubectl"
version = "~> 1.7"
}
}
}
hint: you could also try to use the kubernetes_manifest resource instead of kubectl_manifest
p.s.: We created Terramate to make creating complex Terraform code easier. But this seems perfectly fine for pure Terraform.
|
76380957 | 76381877 | How can I assert that the jest mocked module method was called?
E.g. in my .spec.js I have the following jest mocked module:
jest.mock('../../../../services/logs.service.js', () => ({
log: jest.fn()
}));
Now I would like to assert the log method. I.e. something like this:
//the log was called twice with the text "foo"
expect(log).toHaveBeenCalledWith(2, "foo");
But I can not access the log. I tried putting the log initialization outside the jest mocked module, like so:
const log = jest.fn();
jest.mock('../../../../services/logs.service.js', () => ({
log
}));
But I got the error:
The module factory of jest.mock() is not allowed to reference any out-of-scope variables.
| How can I assert that the jest mocked module method was called? | You can do the following:
JavaScript
import { log } from '../../../../services/logs.service.js';
jest.mock('../../../../services/logs.service.js', () => ({
log: jest.fn()
}));
expect(log).toHaveBeenCalledWith(2, "foo"); // JavaScript
TypeScript
import { log } from '../../../../services/logs.service.js';
jest.mock('../../../../services/logs.service.js', () => ({
log: jest.fn()
}));
const mockedLog = log as jest.MockedFunction<typeof log>;
expect(mockedLog).toHaveBeenCalledWith(2, "foo");
|
76382640 | 76383361 | I have a mobile app developed with Ionic Capacitor. The backend to the app is a .NET Core Web API deployed on Amazon Elastic Beanstalk. I am getting the CORS error "No 'Access-Control-Allow-Origin' header is present on the requested resource" when trying to access the back end from the app.
I have attempted to allow the API to be accessible by any consumer by allowing all origins. Is there a need to configure anything on AWS Elastic Beanstalk?
var app = builder.Build();
app.UseCors(builder => builder
.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader()
);
| Ionic app error: No 'Access-Control-Allow-Origin' header is present on the requested resource |
Log in to the AWS Management Console and navigate to the Elastic Beanstalk service.
Select your application and environment where the .NET Core Web API is deployed.
In the navigation pane, click on "Configuration."
Under the "Software" section, click on "Edit" for the "Environment properties."
Add a new property with the following details:
Name: ACCESS_CONTROL_ALLOW_ORIGIN
Value: * (or the specific origin you want to allow if you don't want to allow all origins)
Save the changes and wait for the environment to update.
Make sure to remove the CORS configuration you mentioned from your .NET Core Web API code, as the CORS handling will now be done by Elastic Beanstalk.
|
76384683 | 76385208 | I would like to draw random numbers from a modified exponential distribution:
p(x) = C * a * Exp[-(a*x)^b] with C=1/Gamma[1 + 1/b] for normalization.
How can I do this in julia? Unfortunately I have only little experience with Julia and no experiences with creating custom random numbers. I would be very grateful for any help.
| Draw random numbers from a custom probability density function in Julia | If I'm not mistaken, that is a p-Generalized Gaussian distribution, which has a rather efficient implementation in Distributions.jl:
using Distributions
mu = 0 # your location parameter
alpha = 1/a # your scale parameter
beta = b # your shape parameter
p = PGeneralizedGaussian(mu, alpha, beta)
Using the Distributions.jl API for univariate distributions, you can sample from this distribution by passing it to the exported rand method. Here is an example of how to sample five independent scalars from a PGeneralizedGaussian distribution with mu = 0, alpha = 1/2 and beta = 3:
julia> p = PGeneralizedGaussian(0, 1/2, 3);
julia> rand(p, 5)
5-element Vector{Float64}:
0.2835117212764108
-0.023160728370422268
0.3075395764050027
-0.19233721955795835
0.21256694763885342
If you want to try to implement the distribution yourself, which I do not recommend unless you are doing this as an exercise in programming math in Julia, you need to define a type that holds the static parameters of the distribution (in your case, just the shape parameter and the scale parameter), then define and export the methods listed here to extend the Distributions.jl API to your custom distribution. In particular, you need to define:
struct MyDistribution <: ContinuousUnivariateDistribution
# ... your distribution parameters here
end
rand(::AbstractRNG, d::MyDistribution) # sample a value from d
sampler(d::MyDistribution) # return d or something that can sample from d more efficiently
logpdf(d::MyDistribution, x::Real) # compute the log of the pdf of d at x
cdf(d::MyDistribution, x::Real) # compute the cdf of d at x
quantile(d::MyDistribution, q::Real) # compute the qth quantile of d
minimum(d::MyDistribution) # return the minimum value supported by d
maximum(d::MyDistribution) # return the maximum value supported by d
insupport(d::MyDistribution, x::Real) # query whether x is supported by d
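For this particular density, a minimal sketch of such a custom type is shown below. It assumes the support is x >= 0 (which the normalization constant 1/Gamma[1 + 1/b] suggests) and uses the fact that if U ~ Gamma(1/b, 1) then U^(1/b) / a has exactly the density from the question; only rand and logpdf are filled in, and the remaining methods from the list above would still be needed for full API support.
using Distributions, Random, SpecialFunctions

struct ModifiedExponential <: ContinuousUnivariateDistribution
    a::Float64   # inverse scale, assumed > 0
    b::Float64   # shape, assumed > 0
end

# inverse-transform sampling via a Gamma draw: U ~ Gamma(1/b, 1), X = U^(1/b) / a
Base.rand(rng::Random.AbstractRNG, d::ModifiedExponential) =
    rand(rng, Gamma(1 / d.b, 1.0))^(1 / d.b) / d.a

# log p(x) = log(a) - log(Gamma(1 + 1/b)) - (a*x)^b for x >= 0
Distributions.logpdf(d::ModifiedExponential, x::Real) =
    x < 0 ? -Inf : log(d.a) - loggamma(1 + 1 / d.b) - (d.a * x)^d.b
With those two methods defined, rand(ModifiedExponential(2.0, 3.0), 5) should already work through the package's generic fallbacks.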
The documentation of the package is very good, so it's an excellent way to get your feet wet if you are trying to learn Julia.
|
76385164 | 76385211 | For my main project, I'm trying to find a way to hide a column in JS. The following function :
function hide() {
const table = document.getElementById('test');
const cols = table.getElementsByTagName('col');
cols[1].style.visibility = "collapse";
}
works great, but the borders don't move after the column is collapsed (the before/after screenshots showing the issue are omitted here).
How can I avoid this issue?
EDIT: This works as intended on Chrome and Edge. Is this a Firefox bug?
Full html is:
function hide() {
const table = document.getElementById('test');
const cols = table.getElementsByTagName('col');
cols[1].style.visibility = "collapse";
}
table, tr, th, td {
border: 1px solid;
border-collapse: collapse;
}
<table id="test">
<colgroup>
<col><col><col>
</colgroup>
<tr>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
<tr>
<td>un</td>
<td>deux</td>
<td>trois</td>
</tr>
<tr>
<td>one</td>
<td>two</td>
<td>three</td>
</tr>
</table>
<button onclick=hide()>Hide</button>
| Border doesn't adapt after collapsing a column | To address this issue, you can use the border-spacing property instead of border-collapse. Modify your CSS as follows:
<style>
table {
border-spacing: 0;
}
th, td {
border: 1px solid;
padding: 5px;
}
</style>
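If the separated-borders look is not acceptable, another widely used workaround (an addition, not part of the original answer) is to drop visibility: collapse and hide each cell of the column from the JS side instead, which renders the same way in Firefox, Chrome and Edge:
function hideColumn(colIndex) {
  const table = document.getElementById('test');
  // hide the cell at colIndex in every row, including the header row
  for (const row of table.rows) {
    row.cells[colIndex].style.display = 'none';
  }
}
Calling hideColumn(1) then hides the second column, and the collapsed borders follow the remaining cells.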
|
First, I saw similar questions but nothing helped me.
I'm trying to sort a list of tuples and convert the data types inside each tuple,
converting them according to a list of tuples I get.
For example, I have a list of tuples where every tuple is built like
(ID, Grade, Height)
A = [(123,23,67),(234,67,45)]
and I have a list of types like this:
[(ID,int),(grade,'s15'),(height,float)]
Now I read that 's15' is a dtype from numpy, but I can't seem to use it.
I tried to copy from the docs:
import numpy as np
dt = np.dtype(('>14'))
but all I get is this error:
dt = np.dtype(('>14'))
TypeError: data type '>' not understood
the docs I copied from:
https://numpy.org/doc/stable/reference/arrays.dtypes.html
and is there a generic converter I can use to convert to any type I'm given?
| TypeError: data type '>' not understood using dtype from numpy | I think you maybe overlooked the documentation you are referring.
You used
dt = np.dtype(('>14'))
which is >14 (fourteen), i.e. the digit 1 followed by 4...
But in fact the documentation clearly mentions
dt = np.dtype('>i4')
which is i4, with the letter i, not the digit 1 (one).
Also, based on the docs, > or < specifies the byte order of the dtype (not a bound): > is big-endian and < is little-endian, so >i would be a big-endian integer (see Endianness).
The number after that indicates the number of bytes given to the dtype (see the docs).
Finally, the S indicates zero-terminated bytes.
Based on your description, what is wanted here is a 15-byte field of zero-terminated bytes.
Furthermore,
dt = np.dtype(('>S15'))
works fine.
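As for the generic converter asked about at the end of the question, a small sketch (the field names and exact dtypes below are assumptions, not taken from the question) is to hand the whole (name, dtype) list to numpy as a structured dtype and let it convert every field in one go:
import numpy as np

A = [(123, 23, 67), (234, 67, 45)]
spec = [('ID', 'i8'), ('grade', 'S15'), ('height', 'f8')]

arr = np.array(A, dtype=np.dtype(spec))  # converts each field to its dtype
print(arr['grade'])                      # [b'23' b'67'] -- converted to zero-terminated bytes
print(arr['height'])                     # [67. 45.]     -- converted to float
arr_sorted = np.sort(arr, order='ID')    # sorting by a named field also comes for free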
I hope this fixes your issue
|